
The recent scandal involving the surveillance of the Associated Press and Fox News by the United States Justice Department has focused attention on the erosion of privacy and freedom of speech in recent years. But before we simply attribute these events to the ethical failings of Attorney General Eric Holder and his staff, we should also consider the technological revolution powering this incident, and thousands like it. It would appear that bureaucrats are simply seduced by the ease with which information can be gathered and manipulated. At the rate that technologies for the collection and fabrication of information are evolving, what is now available to law enforcement and intelligence agencies in the United States, and around the world, will soon be available to individuals and small groups.

We must come to terms with the current information revolution and take the first steps to form global institutions that will assure that our society, and our governments, can continue to function through this chaotic and disconcerting period. The exponential increase in the power of computers will mean changes that go far beyond the limits of slow-moving human government. We will need to build substantial, long-term new institutions to address the crisis. It will not be a matter that can be solved by adding a new division to Homeland Security or Google.

We do not have any choice. To make light of the crisis means allowing shadowy organizations to usurp for themselves immense power through the collection and distortion of information. Failure to keep up with technological change in an institutional sense will mean that in the future government will be at best a symbolic façade with little real authority or capacity to respond to the threats of information manipulation. In the worst-case scenario, corporations and government agencies could degenerate into warring factions, a new form of feudalism in which invisible forces use their control of information to wage murky wars for global domination.

No degree of moral propriety among public servants, or corporate leaders, can stop the explosion of spying and the propagation of false information that we will witness over the next decade. The most significant factor behind this development is not the moral decline of citizens but Moore’s Law, which stipulates that the number of transistors that can be placed economically on a chip will double every 18 months (while the cost of storage has halved every 14 months). This exponential increase in our capability to gather, store, share, alter and fabricate information of every form will offer tremendous opportunities for the development of new technologies. But the rate of change of computational power is so much faster than the rate at which human institutions can adapt — let alone the rate at which the human species evolves — that we will face devastating existential challenges to human civilization.
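As a back-of-the-envelope illustration of the exponentials above (the 18-month doubling and 14-month storage-cost halving are the figures cited in the text; the ten-year horizon is my own illustrative assumption):

```python
# Illustrative projection of the exponentials cited above.
# Assumptions: transistor count doubles every 18 months and storage cost
# halves every 14 months (figures from the text); horizon of 10 years.

def growth_factor(months: float, doubling_period: float) -> float:
    """Multiplicative growth after `months`, doubling every `doubling_period` months."""
    return 2 ** (months / doubling_period)

decade = 120  # months
compute_gain = growth_factor(decade, 18)      # roughly a 100x increase
storage_cost = 1 / growth_factor(decade, 14)  # cost per byte falls to ~1/380th

print(f"After 10 years: ~{compute_gain:.0f}x compute, "
      f"storage at ~1/{1 / storage_cost:.0f} the cost")
```

Even at these conservative-sounding rates, a decade compounds into roughly a hundredfold gain in compute and a several-hundredfold drop in storage cost, which is the scale of change institutions would have to absorb.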

The Challenges We Face as a Result of the Information Revolution

The dropping cost of computational power means that individuals can gather gigantic amounts of information and integrate it into meaningful intelligence about thousands, or millions, of individuals with minimal investment. The ease of extracting personal information from garbage, from recordings of people walking up and down the street, and from aerial photographs, then combining them with other seemingly worthless material and organizing it all in a meaningful manner, will increase dramatically. Facial recognition, speech recognition and instantaneous speech-to-text will become literally child’s play. Inexpensive, tiny surveillance drones will be readily available to collect information on people 24/7 for analysis. My son recently received a helicopter drone with a camera as a present that cost less than $40. In a few years, elaborate tracking of the activities of thousands, or millions, of people will be equally trivial.

Prologue:

‘Let there be light,’ said the Cgi-God, and there was light…and God Rays.

We were out in the desert; barren land, and our wish was that it be transformed into a green oasis; a tropical paradise.

And so our demigods went to work in their digital sand-boxes.
Then, one of the Cgi-Gods populated the land with Dirrogates: digital people in her own likeness.

Welcome to the world… created in Real-time.

A whole generation of people is growing up in such virtual worlds, accustomed to travelling across miles and miles of photo-realistic terrain on their gaming rigs. An entire generation of Transhumans is evolving (perhaps even unknown to them). With each passing year, hardware and software under the command of human intelligence get even closer to simulating the real world, down to physics, caustics and other phenomena exclusive to the planet Earth. How is all this voodoo being done?

Enter –the Game Engine.

All output in the video above is in real-time and from a single modern gaming PC. That’s right… in case you missed it, all of the visuals were generated in real-time on a single PC that can sit on a desk. The engine behind it is the CryEngine 3. A far more customized and amped-up version of this technology, called Cinebox, is a dedicated offering aimed at cinematography, with tools and functions that filmmakers are familiar with. It is these advances in technology, these tools that filmmakers will use, that will acclimatize us to the virtual worlds they build with human performance capture and digital assets such as laser-scanned point clouds of real-world architecture. This is the technology that will play its part and segue us into Transhumanism, rather than a radical crusade that will “convert” humanity to the movement.

  • Mind Uploads need a World to roam in:
Laser-scanned buildings and even whole neighborhood blocks are now commonplace in large-budget Hollywood productions. A detailed point cloud needs massive compute power to render. High-end game engines, when daisy-chained, can render and simulate these large neighborhoods with a real-time animated atmosphere, and populate the land with photo-realistic flora and fauna. Lest we forget… in stereoscopic 3D, for full immersion of our visual cortex.

  • Real World Synced Weather:
Game Engines have powerful and advanced TOD (time of day) editors. Now imagine if a TOD editor module and a weather system could pull data such as wind direction, temperature and weather conditions from real-world sensors, or a real-time data source.
If this could be done, then the augmented world running on the game engine could have details such as leaves blowing in the correct direction. See the video above at around the 0:42 mark for a feel of what I’m aiming for.
Also: the stars would all align, and the virtual night sky would match the real one exactly, though there would be nothing stopping “God” from introducing a blue moon in the sky.
At around the 0:20 mark, the video above shows one of the “Demi-Gods” at work: populating Virtual Earth with exotic trees and forests… mind-candy to keep an uploaded mind from homesickness. As Transhumans, whether as full mind uploads, as augmented humans with bio-mechanical enhancements or indeed even as naturals, we can expect to augment the real world with our dreams of a tropical paradise. Heaven can indeed be a place on Earth.
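The weather-sync idea above can be sketched in a few lines. This is purely hypothetical: the parameter names and the `apply_to_engine` mapping are invented for illustration, and no real engine API (CryEngine or otherwise) is implied.

```python
# Hypothetical sketch: feeding real-world sensor readings into a game-engine
# time-of-day (TOD) / weather module. All names here are invented for
# illustration; no actual engine API is implied.
from dataclasses import dataclass

@dataclass
class WeatherReading:
    wind_deg: float       # wind direction, degrees from north
    wind_speed: float     # metres per second
    temperature_c: float  # air temperature, Celsius
    cloud_cover: float    # 0.0 (clear) .. 1.0 (overcast)

def apply_to_engine(engine_params: dict, reading: WeatherReading) -> dict:
    """Map a sensor reading onto (hypothetical) engine parameters,
    so that e.g. virtual leaves blow in the real-world wind direction."""
    engine_params["wind_direction"] = reading.wind_deg % 360
    engine_params["wind_strength"] = min(reading.wind_speed / 30.0, 1.0)  # normalize to 0..1
    engine_params["haze"] = 0.2 + 0.6 * reading.cloud_cover
    engine_params["ambient_temp"] = reading.temperature_c
    return engine_params

# A single reading from a real-time data source would be applied each tick:
params = apply_to_engine({}, WeatherReading(275.0, 6.0, 31.5, 0.1))
```

In a real system the `WeatherReading` would be polled from a weather API or local sensors on a timer, and the engine would interpolate smoothly between successive readings rather than snapping to each new value.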
Epilogue:

We were tired of our mundane lives in an un-augmented biosphere. As Transhumans, some of us booted up our mind-uploads while yet others ventured out into the desert of the real world in temperature-regulated nano-clothing, experiencing a tropical paradise… even as the “naturals” would deny its very existence.

Recently, scientists have suggested we may really be living in a simulation after all. The Mayans stopped counting time not because they predicted the Winter Solstice of 2012 would be the end of the world… but perhaps because they saw 2013 heralding the dawn of a new era: an era that sees the building blocks come into place for a journey toward an eventual ‘Singularity’.

Dir·ro·gate: a portmanteau of Digital + Surrogate. Borrowed from the novel “Memories with Maya”.
Author’s note: All images, videos and products mentioned are copyright to their respective owners and brands, and there is no implied connection between the brands and Transhumanism.

*** PLEASE alert your friends—Our own continued health and longevity may depend on Steve continuing his work.***

This call for support was also posted by Ilia Stambler on the Longevity Alliance Website, and organized on YouCaring.com by John M. Adams. Eric Schulke has also helped tremendously in spreading the word about the Fundraiser.

Since founding the Los Angeles Gerontology Research Group in 1990, Dr. L. Stephen Coles M.D., Ph.D., has worked tirelessly to develop new ways to slow and ultimately reverse human aging.

Everyone active in the LA-GRG or the worldwide GRG Discussion Group has benefited from his expertise. His continual reporting of news about the latest developments to the List, and his work in areas such as gathering blood samples for complete genome analyses of the oldest people in the world (supercentenarians, aged 110+), is groundbreaking and far ahead of anything that has ever been accomplished before. Publication of this work is expected in collaboration with Stanford University before the end of the year. Other accomplishments are equally notable.

CLICK HERE TO HELP!

BRIEF summary of his work: L. Stephen Coles, M.D., Ph.D. — cited in more than 250 scientific articles — profiled as a notable person in Wikipedia — many other contributions to aging research and advancing long, healthy life.

Steve Coles was diagnosed with adenocarcinoma (pancreatic cancer) at the head of the pancreas on Christmas Eve of last year. Pancreatic cancer is particularly insidious. He underwent a Whipple (surgical) procedure on January 3rd that produced a beneficial result: the tumor’s complete obstruction of the common bile duct, which had caused jaundice and severe pruritus (skin itching leading to scratching to the point of bleeding), was reversed within two days. His subsequent chemotherapy with Gemzar over the past three months will hopefully prevent metastases from spreading to other organs. But we won’t know his prognosis until June 7th, when a CT scan, interpreted by a cancer radiologist, will be compared with a baseline scan performed before the start of chemo.

We now have the opportunity to carry out a personalized chemo treatment regimen created by Champions Oncology, a start-up company in Baltimore, MD, USA affiliated with the Johns Hopkins School of Medicine. Champions is a world-class organization that will analyze the tissue sample that has already been sent to them. Then, a custom treatment program will be prescribed for Steve based on a mouse model, since each tumor is unique and pure test-tube trials have not been shown to be effective.

Champions Oncology’s service is to test in mice what can work for Dr. Coles. This is done through two steps:

(1) Implant Dr. Coles’s tumor tissue in mice. (This part has been successfully carried out, and it will allow us to test nine different treatment protocols on Dr. Coles’s specific tumor tissue in mice.)

(2) Test the treatments on the mice. (The treatments have been defined with Dr. James P. Watson, Dr. Coles, and his oncologists.)

Dr. Joao Pedro de Magalhaes of Liverpool, UK was the first to propose employing the services of Champions Oncology. They have a good track record. The biggest risk is that the process normally takes so long that the patient dies before the results can be obtained (especially with such an aggressive, malignant cancer as Dr. Coles’s); luckily, this part went right. There was also a risk that Step 1 wouldn’t work; luckily for us, this part went right, too. Therefore, so far, it seems that choosing Champions Oncology’s approach was the right choice. We can’t be sure that Step 2 will be as successful, but we need to try.

In addition to his medical team here in the U.S., our international friends have been active on his behalf. They successfully negotiated a 60 percent reduction in cost.

NOW, YOU CAN HELP IN TWO WAYS:

(1) CONTRIBUTE TO THIS FUND

Time is of the essence. The good people at Champions Oncology have agreed to begin the analysis immediately.

Steve Coles needs your support.

It may make THE difference. Please dig deep and support him by contributing to the fund.

*** Our own continued health and longevity may depend on Steve continuing his work.***

(2) SEND REFERRALS TO CHAMPIONS ONCOLOGY

Champions Oncology is an early-stage for-profit company. Champions is not a philanthropy. Like many companies offering breakthrough technologies, it has light bills to pay, payroll to make on time, and many other typical expenses.

Please think of any oncologists who may refer patients to Champions, then contact any of the individuals listed below so we may get life-saving information about Champions into their hands. Champions is particularly well set up to accommodate physicians and patients in the Eastern U.S., Germany, France, Brazil, and Japan.

We wish to acknowledge the GRG (the Gerontology Research Group, a discussion group of ~400 members worldwide).

We owe a special thank you to The International Longevity Alliance Movement for their support.

Contacts:

1. Edouard Debonneuil [email protected] France Skype ID: edebonneuil

2. Daniel Wuttke [email protected] Germany Skype: admiral_atlan

3. Ilia Stambler [email protected] Israel Skype: iliastam

4. John M. (Johnny) Adams [email protected]

U.S. (949) 922‑9786 Skype: agingintervention

Updates 06/03/2013

by John M. (Johnny) Adams

IMPORTANT MESSAGE: Dr. Coles has received a contribution and is forwarding it directly to Champions Oncology.

So as of now, 10:20 am PDT, we have $6175 of the needed $10,000!

I have contacted YouCaring and asked how to change the “$1475 raised of $10000 goal”.

Supporters

Franco Cortese donated $100.00 (Monday, June 03, 2013)
“PLEASE donate ANYTHING you can to help save the life of L. Stephen Coles, who has spent his entire professional career trying to save yours!”

Aubrey de Grey donated $300.00 (Monday, June 03, 2013)

Anonymous donated $5,000.00, offline donation (Monday, June 03, 2013)

RetirementSingularity.com donated a hidden amount (Monday, June 03, 2013)

Anonymous donated a hidden amount (Monday, June 03, 2013)

Sven Bulterijs donated $15.00 (Monday, June 03, 2013)

Anonymous donated a hidden amount (Sunday, June 02, 2013)

kg goldberger donated $20.00 (Sunday, June 02, 2013)
“Prayers are on the way.”

Bijan Pourat MD donated $250.00 (Saturday, June 01, 2013)

Maxim Kholin donated a hidden amount (Saturday, June 01, 2013)
“Aging is a disease. Aging is responsible for more than 65% of deaths. Aging is a cause of adult cancer, stroke and many other age-related diseases. Researchers fighting aging are the best people; they are fighting for all of us. Let’s pay them back!”

Anonymous donated $60.00 (Saturday, June 01, 2013)

Nils Alexander Hizukuri donated $30.00 (Saturday, June 01, 2013)
“All the best!”

Anonymous donated $40.00 (Saturday, June 01, 2013)

Danny Bobrow donated a hidden amount (Saturday, June 01, 2013)
“Steve, win this fight for us all. I send you healing thoughts.”

Cliff Hague donated $100.00 (Saturday, June 01, 2013)
“Best wishes for a speedy recovery.”

Tom Coote donated $100.00 (Friday, May 31, 2013)
“With Best Wishes!”

Anonymous donated $100.00 (Friday, May 31, 2013)

Allen Taylor donated $25.00 (Friday, May 31, 2013)

Gunther Kletetschka donated a hidden amount (Friday, May 31, 2013)

john mccormack, Australia donated $50.00 (Friday, May 31, 2013)

phil kernan donated $100.00 (Friday, May 31, 2013)

Gary and Marie Livick donated $100.00 (Friday, May 31, 2013)

ingeseim donated a hidden amount (Friday, May 31, 2013)

TeloMe Inc. donated $100.00 (Friday, May 31, 2013)
“Not only is this an important cause for Steve, friends and family, but it is an outstanding, real-world example of the advancing frontier of science and medicine. The entire life-extension community should rally in support of this effort for Steve and for the acquisition of important scientific knowledge.”
- Preston Estep, Ph.D., CEO and Chief Scientific Officer, TeloMe, Inc.

Anonymous donated $5.00 (Thursday, May 30, 2013)

Anonymous donated $60.00 (Thursday, May 30, 2013)

Larry Abrams donated $100.00 (Thursday, May 30, 2013)

Anonymous donated a hidden amount (Thursday, May 30, 2013)

Anonymous donated a hidden amount (Thursday, May 30, 2013)

Anonymous donated a hidden amount (Thursday, May 30, 2013)

Anonymous donated a hidden amount (Wednesday, May 29, 2013)

By Avi Roy, University of Buckingham

In rich countries, more than 80% of the population today will survive past the age of 70. About 150 years ago, only 20% did. Yet in all this time, only one person has been documented to live beyond the age of 120. This has led experts to believe that there may be a limit to how long humans can live.

Animals display an astounding variety of maximum lifespans, ranging from mayflies and gastrotrichs, which live for 2 to 3 days, to giant tortoises and bowhead whales, which can live to 200 years. The record for the longest-living animal belongs to the quahog clam, which can live for more than 400 years.

If we look beyond the animal kingdom, among plants the giant sequoia lives past 3000 years, and bristlecone pines reach 5000 years. The record for the longest living plant belongs to the Mediterranean tapeweed, which has been found in a flourishing colony estimated at 100,000 years old.

(Image: This jellyfish never dies. Credit: Michael W. May)

Some animals like the hydra and a species of jellyfish may have found ways to cheat death, but further research is needed to validate this.

The natural laws of physics may dictate that most things must die. But that does not mean we cannot use nature’s templates to extend healthy human lifespan beyond 120 years.

Putting a lid on the can

Gerontologist Leonard Hayflick at the University of California thinks that humans have a definite expiry date. In 1961, he showed that human skin cells grown under laboratory conditions tend to divide approximately 50 times before becoming senescent, meaning no longer able to divide. This phenomenon, in which a normal cell can multiply only a limited number of times, is called the Hayflick limit.

Since then, Hayflick and others have successfully documented the Hayflick limits of cells from animals with varied life spans, including the long-lived Galapagos turtle (200 years) and the relatively short-lived laboratory mouse (3 years). The cells of a Galapagos turtle divide approximately 110 times before senescing, whereas mice cells become senescent within 15 divisions.

The Hayflick limit gained more support when Elizabeth Blackburn and colleagues discovered the ticking clock of the cell in the form of telomeres. Telomeres are repetitive DNA sequences at the ends of chromosomes that protect the chromosomes from degrading. With every cell division, these telomeres appeared to get shorter, and with each shortening the cells became more likely to turn senescent.
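A toy model makes the mechanism concrete. The specific numbers below are arbitrary illustrations, not biological measurements; only the qualitative behaviour, a finite division count reached as telomeres erode past a threshold, reflects the idea described above.

```python
# Toy model of telomere shortening driving replicative senescence.
# All numbers are illustrative only; real telomere dynamics are far messier.

def divisions_until_senescence(telomere_bp: int, loss_per_division: int,
                               senescence_threshold: int) -> int:
    """Count cell divisions until telomere length would fall below the threshold."""
    divisions = 0
    while telomere_bp - loss_per_division >= senescence_threshold:
        telomere_bp -= loss_per_division
        divisions += 1
    return divisions

# With arbitrary round numbers the count lands near the ~50 divisions
# Hayflick observed for human cells:
print(divisions_until_senescence(10_000, 100, 5_000))  # 50
```

The point of the model is what it omits: add telomerase activity (restoring lost base pairs each division, as discussed later in the article) and the loop never terminates, which is exactly why the "limit" may be better read as a clock.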

Other scientists used census data and complex modelling methods to come to the same conclusion: that maximum human lifespan may be around 120 years. But no one has yet determined whether we can change the human Hayflick limit to become more like long-lived organisms such as the bowhead whales or the giant tortoise.

What gives more hope is that no one has actually proved that the Hayflick limit actually limits the lifespan of an organism. Correlation is not causation. For instance, despite having a very small Hayflick limit, mouse cells behave as if they have no Hayflick limit at all when grown in the oxygen concentration that they experience in the living animal (3–5% rather than the standard 20%). Under those conditions they make enough telomerase, an enzyme that replaces degraded telomeres with new ones, and typically divide indefinitely. So it might be that the Hayflick “limit” is really more of a Hayflick “clock”, giving a readout of the age of the cell rather than driving the cell to death.

The trouble with limits

(Image: Happy last few days? It doesn’t have to end this way. Credit: ptimat)

The Hayflick limit may represent an organism’s maximal lifespan, but what is it that actually kills us in the end? To test the Hayflick limit’s ability to predict our mortality we can take cell samples from young and old people and grow them in the lab. If the Hayflick limit is the culprit, a 60-year-old person’s cells should divide far fewer times than a 20-year-old’s cells.

But this experiment fails time after time. The 60-year-old’s skin cells still divide approximately 50 times – just as many as the young person’s cells. But what about the telomeres: aren’t they the inbuilt biological clock? Well, it’s complicated.

When cells are grown in a lab their telomeres do indeed shorten with every cell division and can be used to find the cell’s “expiry date”. Unfortunately, this does not seem to relate to actual health of the cells.

It is true that as we get older our telomeres shorten, but only for certain cells and only during certain times. Most importantly, trusty lab mice have telomeres that are five times longer than ours, but their lives are 40 times shorter. This is why the relationship between telomere length and lifespan is unclear.

Using the Hayflick limit and telomere length to judge maximum human lifespan is akin to understanding the demise of the Roman Empire by studying the material properties of the Colosseum. Rome did not fall because the Colosseum degraded; quite the opposite, in fact: the Colosseum degraded because the Roman Empire fell.

Within the human body, most cells do not simply senesce. They are repaired, cleaned or replaced by stem cells. Your skin degrades as you age because your body cannot carry out its normal functions of repair and regeneration.

To infinity and beyond

If we could maintain our body’s ability to repair and regenerate itself, could we substantially increase our lifespans? This question is, unfortunately, far too under-researched for us to be able to answer confidently. Most institutes on ageing promote research that delays the onset of the diseases of ageing, not research that targets human life extension.

Those that do look at extension study how diets like calorie restriction affect human health, or the health impacts of molecules like resveratrol, derived from red wine. Other research tries to understand the mechanisms underlying the beneficial effects of certain diets and foods, with hopes of synthesising drugs that do the same. The tacit understanding in the field of gerontology seems to be that, if we can keep a person healthy longer, we may be able to modestly improve lifespan.

Living long and having good health are not mutually exclusive. On the contrary, you cannot have a long life without good health. Currently most ageing research is concentrated on improving “health”, not lifespan. If we are going to live substantially longer, we need to engineer our way past the current 120-year barrier.

Avi Roy does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

Read the original article.

This essay was originally published by the Institute for Ethics & Emerging Technologies

One of the most common tropes recurring throughout anti-Transhumanist rhetoric is our supposedly rampant hubris. Hubris is an ancient Greek concept meaning excess of pride, carrying connotations of reckless vanity and heedless self-absorption, often to the point of carelessly endangering the welfare of others in the process. It paints us in a selfish and dangerous light, as though we were striving for the technological betterment of ourselves alone and the improvement of the human condition solely as it pertains to ourselves, so as to be enhanced relative to the majority of humanity.

In no way is this correct or even salient. I, and the majority of Transhumanists, Techno-Progressives and emerging-tech-enthusiasts I would claim, work toward promoting beneficial outcomes and deliberating the repercussions and most desirable embodiments of radically-transformative technologies for the betterment of all mankind first and foremost, and only secondly for ourselves if at all.

The irony of this situation is that the very group who most often hurls the charge of hubris against the Transhumanist community is, according to the logic of hubris, more hubristic than those they rail against. Bio-Luddites, and more generally Neo-Luddites, can be clearly seen to be more self-absorbed and recklessly selfish than the Transhumanists they are so quick to raise qualms against.

The logic of this conclusion is simple: Transhumanists seek merely to better determine the controlling circumstances and determining conditions of our own selves, whereas Neo-Luddites seek to determine such circumstances and conditions (even if using a negative definition, i.e., the absence of something) not only for everyone besides themselves alive at the moment, but even for the uncountable multitudes of minds and lives still fetal in the future.

We do not seek to radically transform Humanity against their will; indeed, this is so off the mark as to be antithetical to the true Transhumanist impetus — for we seek to liberate their wills, not leash or lash them. We seek to offer every human alive the possibility of transforming themselves more effectively according to their own subjective projected objectives; of actualizing and realizing themselves; ultimately of determining themselves for themselves. We seek to offer every member of Humanity the choice to better choose and the option for more optimal options: the self not as final-subject but as project-at-last.

Neo-Luddites, on the other hand, wish to deny the whole of humanity that choice. They actively seek the determent, relinquishment or prohibition of technological self-transformation, and believe in the heat of their idiot-certainty that they have either the intelligence or the right to force their own preference upon everyone else, present and future. Such lumbering, oafish paternalism patronizes the very essence of Man, whose only right is to write his own and whose only will is to will his own – or at least to vow that he will will his own one fateful yet fate-free day.

We seek solely to choose ourselves, and to give everyone alive and yet-to-live the same opportunity: of choice. Neo-Luddites seek not only to choose for themselves but to force this choice upon everyone else as well.

If any of the original Luddites were alive today, perhaps they would loom large to denounce the contemporary caricature of their own movement and rail their tightly-spooled rage against the modern Neo-Luddites that use Ludd’s name in so reckless a threadbare fashion. At the heart of it they were trying to free their working-class fellowship. There would not have been any predominant connotations of extending the distinguishing features of the Luddite revolt into the entire future, no hint of the possibility that they would set a precedent which would effectively forestall or encumber the continuing advancement of technology at the cost of the continuing betterment of humanity.

Who were they to intimate that continuing technological and methodological growth and progress would continually liberate humanity in fits and bounds of expanding freedom, opening up the parameters of their possible actions, freeing choice from chance and making the general conditions of being continually better and better? If this sentiment had been predominant during 1811–1817, perhaps they would have laid their hammers down. They were seeking the liberation of their people, after all; if they had known that their own actions might spawn a future movement seeking to dampen and deter the continual technological liberation of Mankind, perhaps they would have remarked that such future Neo-Luddites missed their point completely.

Perhaps the salient heart of their efforts was not the relinquishment of technology but rather the liberation of their fellow man. Perhaps they would have remarked that while in this particular case technological relinquishment coincided with the liberation of their fellow man, this shouldn’t be heralded as a hard rule. Perhaps they would have been ashamed of the way in which their name was to be used as the nametag and figurehead for the contemporary fight against liberty and Man’s autonomy. Perhaps Ludd is spinning like a loom in his grave right now.

Does the original Luddites’ enthusiasm for choice and the liberation of their fellow man supersede their revolt against technology? I think it does. The historical continuum of which Transhumanism is but the contemporary leading-tip encompasses not only the technological betterment of self and society but the non-technological as well. Historical Utopian ventures and visions are valid antecedents of the Transhumanist impetus, just as Techno-Utopian ones are. While the emphasis on technology predominant in Transhumanist rhetoric isn’t exactly misplaced (simply because technology is our best means of affecting and changing self and society, whorl and world, and thus our best means of improving them according to subjective projected objectives), it isn’t a necessary precondition, and its predominance does not preclude the inclusion of non-technological attempts to improve the human condition as well.

The dichotomy between knowledge and device, between technology and methodology, doesn’t have a stable ontological ground in the first place. What is technology but embodied methodology, and methodology but internalized technology? Language is just as unnatural as quantum computers in geological scales of time. To make technology a necessary prerequisite is to miss the end for the means and the mark for a lark. The point is that we are trying to consciously improve the state of self, society and world; technology has simply superseded methodology as the most optimal means of accomplishing that, and now constitutes our best means of effecting our affectation.

The original Luddite movement was less against advancing technology and more about the particular repercussions that specific advancements in technology (i.e. semi-automated looms) had on their lives and circumstances. To claim that Neo-Luddism has any real continuity-of-impetus with the original Luddite movement that occurred throughout 1811–1817 may actually be antithetical to the real motivation underlying the original Luddite movement – namely the liberation of the working class. Indeed, Neo-Luddism itself, as a movement, may be antithetical to the real impetus of the initial Luddite movement both for the fact that they are trying to impose their ideological beliefs upon others (i.e. prohibition is necessarily exclusive, whereas availability of the option to use a given technology is non-exclusive and forces a decision on no one) and because they are trying to prohibit the best mediator of Man’s ever-increasing self-liberation – namely technological growth.

Support for these claims can be found in the secondary literature. For instance, in Luddites and Luddism, Kevin Binfield sees the Luddite movement as an expression of working-class discontent during the Napoleonic Wars rather than as an expression of antipathy toward technology in general or toward advancing technology as a general trend (Binfield, 2004).

And in terms of base premises, it is not as though Luddites are categorically against technology in general; rather, they are simply against either a specific technology, a specific embodiment of a general class of technology, or a specific degree of technological sophistication. After all, most every Luddite alive wears clothes, takes antibiotics, and uses telephones. Legendary Ludd himself still wanted the return of his manual looms, a technology, when he struck his first blow. I know many Transhumanists and Technoprogressives who still label themselves as such despite being wary of the increasing trend of automation.

This was the Luddites’ own concern: that automation would displace manual work in their industry and thereby severely limit their possible choices and freedoms, such as having enough discretionary income to purchase necessities. If their government had been handing out a guaranteed basic income funded by taxes on corporations according to the degree to which they replaced previously-manual labor with automated labor, I’m sure the Luddites would have happily laid their hammers down and laughed all the way home. Even the Amish only prohibit specific levels of technological sophistication, rather than all technology in general.

In other words no one is against technology in general, only particular technological embodiments, particular classes of technology or particular gradations of technological sophistication. If you’d like to contest me on this, try communicating your rebuttal without using the advanced technology of cerebral semiotics (i.e. language).

References.

Binfield, K. (2004). Luddites and Luddism. Baltimore and London: The Johns Hopkins University Press.

The following article was originally published by Immortal Life

When asked what the biggest bottleneck for Radical or Indefinite Longevity is, most thinkers say funding. Some say the biggest bottleneck is breakthroughs and others say it’s our way of approaching the problem (i.e. that we’re seeking healthy life extension whereas we should be seeking more comprehensive methods of indefinite life-extension), but the majority seem to feel that what is really needed is adequate funding to plug away at developing and experimentally-verifying the various, sometimes mutually-exclusive technologies and methodologies that have already been proposed. I claim that Radical Longevity’s biggest bottleneck is not funding, but advocacy.

This is because the final objective – increased funding for Radical Longevity and Life Extension research – can be more effectively and efficiently achieved through public advocacy for Radical Life Extension than through direct funding or direct research, per unit of time or effort. Research and development obviously still need to be done, but an increase in researchers requires an increase in funding, and an increase in funding requires an increase in the public perception of RLE’s feasibility and desirability.

There is no definitive timespan that it will take to achieve indefinitely-extended life. How long it takes to achieve Radical Longevity is determined by how hard we work at it and how much effort we put into it. More effort means that it will be achieved sooner. And by and large, an increase in effort can best be achieved by an increase in funding, and an increase in funding can best be achieved by an increase in public advocacy. You will likely accelerate the development of Indefinitely-Extended Life more, per unit of time or effort, by advocating the desirability, ethicality and technical feasibility of longer life than you will by doing direct research, or by working towards the objective of directly contributing funds to RLE projects and research initiatives.

Just five years ago, anybody who spoke of technological unemployment was labeled a Luddite, a techno-utopian, or simply someone who doesn’t understand economics. Today things are very different – everyone from New York Times columnist Tom Friedman to CBS is jumping on the bandwagon.


Those of us who have been speaking about the tremendous impact of automation on the workforce know very well that this isn’t a passing fad, but a problem that will only be exacerbated in the future. Most of us agree on what the problem is (the exponential growth of high-tech replacing humans faster and faster), and we agree that education will play a crucial role (not coincidentally, I started a company – Esplori – precisely to address this problem); but very few seem to suggest that we should use this opportunity to rethink our entire economic system and what the purpose of society should be. I am convinced this is exactly what we need to do. My book, Robots Will Steal Your Job, But That’s OK: How to Survive the Economic Collapse and Be Happy – published in 2012, and which you can also read online for free – shows how we might go about building a better tomorrow.

We have come to believe that we are dependent on governments and corporations for everything, and now that technology is ever more pervasive, our dependence on them is even stronger. And of course we don’t question the cycle of labor-for-income and income-for-survival, or the conspicuous-consumption model that has become dominant in virtually every country – a model that is not only ecologically unsustainable but also generates immense income inequality.

Well, I do. I challenge the assumption that we should live to work, and even that we should work to live, for that matter. In an age where we already produce more than enough food, energy, and drinkable water for 7 billion people with little to no human labour, while 780 million lack access to clean water and 860 million are suffering from chronic hunger, it follows that the system we have in place isn’t allocating resources efficiently. And rather than going back to outdated ideologies (e.g. socialism), we can try new forms of societal structure, starting with open-source philosophy, shared knowledge, self-reliance, and sustainable communities.

There are many transitional steps that we can take – reduced workweek, reform patent and copyright laws, switch to distributed and renewable energies – and there will be bumps along the road, no doubt. But if we move in the right direction, if we are ready to abandon ideologies and stick to whatever works best, I think we will prevail – simply because we will realise that there is no war other than the one we are fighting with ourselves.

This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.

Introduction

Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial-General-Intelligence or Whole-Brain-Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure of an impending intelligence explosion (e.g. Kurzweil’s measure is when enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below which may indicate that the imminence of a coming intelligence explosion is more impacted by basic processing speed – or instructions per second (IPS), regardless of cost or resource requirements per unit of computation – than it is by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than to implement AGI alone – or rather that using WBE as a mediator of an increase in the rate of progress in AGI may yield an AGI faster, or more efficiently per unit of effort or funding, than working on AGI directly.

Loaded Uploads:

Petascale supercomputers in existence today exceed the processing-power requirements estimated by Kurzweil, Moravec, and Storrs-Hall [3]. If a wealthy individual were uploaded onto a petascale supercomputer today, they would have the same computational resources that the average person will have in 2019 according to Kurzweil’s figures, when computational processing power equal to that of the human brain – which he estimates at 20 quadrillion calculations per second – will be available for $1,000. While we may not yet have the necessary software to emulate a full human nervous system, the bottleneck is progress in the field of neurobiology rather than software performance in general. What is important is that the raw processing power estimated by some has already been surpassed – and the possibility of creating an upload may not have to wait for drastic increases in computational price performance.

The rate of signal transmission in electronic computers has been estimated to be roughly 1 million times as fast as the signal-transmission speed between neurons, which is limited by the rate of passive chemical diffusion. Since the rate of signal transmission equates with the subjective perception of time, an upload would presumably experience the passing of time one million times faster than biological humans. If Yudkowsky’s observation [4] that this would be the equivalent of experiencing all of history since Socrates every 18 “real-time” hours is correct, then such an emulation would experience 250 subjective years for every hour and roughly 4 years a minute. A day would be equal to 6,000 years, a week to 42,000 years, and a month to 180,000 years.
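Taking the figure of 250 subjective years per real-time hour as given, the conversions can be checked with a few lines of arithmetic. This is only a sanity check on the ratios; the 250-years-per-hour rate is the sole input assumed:

```python
# Sanity-check the subjective-time conversions, taking the essay's figure
# of 250 subjective years per real-time hour as the sole input.
YEARS_PER_HOUR = 250

def subjective_years(real_hours):
    """Subjective years experienced by the emulation over a span of real time."""
    return YEARS_PER_HOUR * real_hours

print(subjective_years(1 / 60))   # one real minute -> ~4.2 subjective years
print(subjective_years(1))        # one real hour   -> 250 subjective years
print(subjective_years(24))       # one real day    -> 6,000 subjective years
print(subjective_years(24 * 7))   # one real week   -> 42,000 subjective years
print(subjective_years(24 * 30))  # one real month  -> 180,000 subjective years
```

Note that 250 years per hour corresponds to a speed-up factor of roughly two million, somewhat higher than the one-million signal-transmission ratio; both figures are rough estimates.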

Moreover, these figures use the signal transmission speed of current, electronic paradigms of computation only, and thus the projected increase in signal-transmission speed brought about through the use of alternative computational paradigms, such as 3-dimensional and/or molecular circuitry or Drexler’s nanoscale rod-logic [5], can only be expected to increase such estimates of “subjective speed-up”.

The claim that the subjective perception of time and the “speed of thought” is a function of the signal-transmission speed of the medium or substrate instantiating such thought or facilitating such perception-of-time follows from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that the mind is instantiated by the physical operations of the brain. Thought and perception of time (or the rate at which anything is perceived really) are experiential modalities that constitute a portion of the brain’s cumulative functional modalities. If the functional modalities of the brain are instantiated by the physical operations of the brain, then it follows that increasing the rate at which such physical operations occur would facilitate a corresponding increase in the rate at which such functional modalities would occur, and thus the rate at which the experiential modalities that form a subset of those functional modalities would likewise occur.

Petascale supercomputers have surpassed the rough estimates made by Kurzweil (20 petaflops, or 20 quadrillion calculations per second), Moravec (100,000 MIPS), and others. Most argue that we still need to wait for software improvements to catch up with hardware improvements. Others argue that even if we don’t understand how the operation of the brain’s individual components (e.g. neurons, neural clusters, etc.) converges to create the emergent phenomenon of mind – or even how such components converge so as to create the basic functional modalities of the brain that have nothing to do with subjective experience – we would still be able to create a viable upload. Nick Bostrom and Anders Sandberg, for instance, have argued in their 2008 Whole Brain Emulation Roadmap [6] that if we understand the operational dynamics of the brain’s low-level components, we can computationally emulate those components, and the emergent functional modalities of the brain and the experiential modalities of the mind will emerge therefrom.
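As a rough illustration of how far petascale hardware exceeds these estimates, the figures can be normalized to operations per second and compared. The supercomputer figure used here (33.86 petaflops, Tianhe-2’s 2013 benchmark) is my own illustrative assumption, not a number from the essay, and treating FLOPS, MIPS and “calculations per second” as interchangeable is a simplification:

```python
# Compare cited brain-capacity estimates against a petascale machine,
# normalizing everything to operations per second.
estimates_ops = {
    "Kurzweil (2005)": 20e15,     # 20 quadrillion calculations/sec
    "Moravec (1997)": 100_000e6,  # 100,000 MIPS = 1e11 instructions/sec
}
supercomputer_ops = 33.86e15      # assumed: Tianhe-2, ~33.86 petaflops (2013)

for name, ops in estimates_ops.items():
    print(f"{name}: exceeded by a factor of about {supercomputer_ops / ops:,.0f}")
```

Even under the most demanding of the two estimates (Kurzweil’s), the assumed machine clears the bar, which is the essay’s point: the raw-processing-power threshold has already been crossed.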

Mind Uploading is (Largely) Independent of Software Performance:

Why is this important? Because if we don’t have to understand how the separate functions and operations of the brain’s low-level components converge so as to instantiate the higher-level functions and faculties of brain and mind, then we don’t need to wait for software improvements (or progress in methodological implementation) to catch up with hardware improvements. Note that for the purposes of this essay “software performance” will denote the efficacy of the “methodological implementation” of an AGI or Upload (i.e. designing the mind-in-question, regardless of hardware or “technological implementation” concerns) rather than how optimally software achieves its effect(s) for a given amount of available computational resources.

This means that if the estimates for sufficient processing power to emulate the human brain noted above are correct then a wealthy individual could hypothetically have himself destructively uploaded and run on contemporary petascale computers today, provided that we can simulate the operation of the brain at a small-enough scale (which is easier than simulating components at higher scales; simulating the accurate operation of a single neuron is less complex than simulating the accurate operation of higher-level neural networks or regions). While we may not be able to do so today due to lack of sufficient understanding of the operational dynamics of the brain’s low-level components (and whether the models we currently have are sufficient is an open question), we need wait only for insights from neurobiology, and not for drastic improvements in hardware (if the above estimates for required processing-power are correct), or in software/methodological-implementation.

If emulating the low-level components of the brain (e.g. neurons) will give rise to the emergent mind instantiated thereby, then we don’t actually need to know “how to build a mind” – whereas we do in the case of an AGI (which for the purposes of this essay shall denote an AGI not based on the human or mammalian nervous system, even though an upload might qualify as an AGI according to many people’s definitions). This follows naturally from the conjunction of the premises that 1. the system we wish to emulate already exists and 2. we can create (i.e. computationally emulate) the functional modalities of the whole system by understanding only the operation of the low-level components’ functional modalities.

Thus, I argue that a wealthy upload who did this could conceivably accelerate the coming of an intelligence explosion by such a large degree that it could occur before computational price performance drops to a point where the basic processing power required for such an emulation is available for a widely-affordable price, say for $1,000 as in Kurzweil’s figures.

Such a scenario could make basic processing power, or Instructions-Per-Second, more indicative of an imminent intelligence explosion or hard take-off scenario than computational price performance.

If we can achieve human whole-brain-emulation even one week before we can achieve AGI (an AGI whose cognitive architecture is not based on the biological human nervous system) and set this upload to work on creating an AGI, then such an upload would have, according to the “subjective-speed-up” factors given above, 42,000 subjective years in which to succeed in designing and implementing an AGI, for every one real-time week that normatively-biological AGI workers have to succeed.

The subjective-perception-of-time speed-up alone would be enough to greatly improve his/her ability to accelerate the coming of an intelligence explosion. Other features, like increased ease of self-modification and the ability to make as many copies of himself as he has processing power to run, only increase his potential to accelerate the coming of an intelligence explosion.

This is not to say that we can run an emulation without any software at all. Of course we need software – but we may not need drastic improvements in software, or a reinventing of the wheel in software design.

So why should we be able to simulate the human brain without understanding its operational dynamics in exhaustive detail? Are there any other processes or systems amenable to this circumstance, or is the brain unique in this regard?

There is a simple reason why this claim seems intuitively doubtful. One would expect that we must understand the underlying principles of a given technology’s operation in order to implement and maintain it. This is, after all, the case for all other technologies throughout the history of humanity. But the human brain is categorically different in this regard, because it already exists.

If, for instance, we found a technology and wished to recreate it, we could do so by copying the arrangement of its components. But in order to make any changes to it, or any variations on its basic structure or principles-of-operation, we would need to know how to build it, maintain it, and predictively model it with a fair amount of accuracy. In order to make any new changes, we need to know how such changes will affect the operation of the other components – and this requires being able to predictively model the system. If we don’t understand how changes will impact the rest of the system, then we have no reliable means of implementing any changes.

Thus, if we seek only to copy the brain, and not to modify or augment it in any substantial way, then it is wholly unique in the fact that we don’t need to reverse-engineer its higher-level operations in order to instantiate it.

This approach should be considered a category separate from reverse-engineering. It would indeed involve a form of reverse-engineering at the scale we seek to simulate (e.g. neurons or neural clusters), but it lacks many features of reverse-engineering by virtue of the fact that we don’t need to understand the system’s operation on all scales. For instance, knowing the operational dynamics of the atoms composing a larger system (e.g. any mechanical system) wouldn’t necessarily translate into knowledge of the operational dynamics of its higher-scale components. The approach mind-uploading falls under – where reverse-engineering at a small enough scale is sufficient to recreate the system, provided that we don’t seek to modify its internal operation in any significant way – I will call Blind Replication.

Blind Replication disallows any sort of significant modification, because if one doesn’t understand how processes affect other processes within the system, then one has no way of knowing how modifications will change other processes, and thus the emergent function(s) of the system. We wouldn’t have a way to translate functional or optimization objectives into changes made to the system that would facilitate them. There are also liability issues, in that one wouldn’t know how the system would work in different circumstances, and would have no guarantee of such systems’ safety or their vicarious consequences. So government couldn’t be sure of the reliability of systems made via Blind Replication, and corporations would have no way of optimizing such systems so as to increase a given performance metric in an effort to increase profits – and indeed would be unable to obtain intellectual-property rights over a technology whose inner workings or “operational dynamics” they cannot describe.

However, government and private industry wouldn’t be motivated by such factors (that is, ability to optimize certain performance measures, or to ascertain liability) in the first place, if they were to attempt something like this – since they wouldn’t be selling it. The only reason I foresee government or industry being interested in attempting this is if a foreign nation or competitor, respectively, initiated such a project, in which case they might attempt it simply to stay competitive in the case of industry and on equal militaristic defensive/offensive footing in the case of government. But the fact that optimization-of-performance-measures and clear liabilities don’t apply to Blind Replication means that a wealthy individual would be more likely to attempt this, because government and industry have much more to lose in terms of liability, were someone to find out.

Could Upload+AGI be easier to implement than AGI alone?

This means that the creation of an intelligence with a subjective perception of time significantly greater than that of unmodified humans (what might be called Ultra-Fast Intelligence) may be more likely to occur via an upload than via an AGI, because the creation of an AGI is largely determined by increases in both computational processing and software performance/capability, whereas the creation of an upload may be determined by and large by processing power, and may thus remain largely independent of the need for significant improvements in software performance or “methodological implementation”.

If the premise that such an upload could significantly accelerate a coming intelligence explosion (whether by using his/her comparative advantages to recursively self-modify his/herself, to accelerate innovation and R&D in computational hardware and/or software, or to create a recursively-self-improving AGI) is taken as true, it follows that even the coming of an AGI-mediated intelligence explosion specifically, despite being impacted by software improvements as well as computational processing power, may be more impacted by basic processing power (e.g. IPS) than by computational price performance — and may be more determined by computational processing power than by processing power + software improvements. This is only because uploading is likely to be largely independent of increases in software (i.e. methodological as opposed to technological) performance. Moreover, development in AGI may proceed faster via the vicarious method outlined here – namely having an upload or team of uploads work on the software and/or hardware improvements that AGI relies on – than by directly working on such improvements in “real-time” physicality.

Virtual Advantage:

The increase in subjective perception of time alone (if Yudkowsky’s estimate is correct, a ratio of 250 subjective years for every “real-time” hour) gives him/her a massive advantage. It would also likely allow him/her to counteract and negate any attempts made from “real-time” physicality to stop, slow or otherwise deter him/her.

There is another feature of virtual embodiment that could increase the upload’s ability to accelerate such developments. Neural modification – with which he could optimize his current functional modalities (e.g. what we coarsely call “intelligence”) or increase the metrics underlying them, thus amplifying his existing skills and cognitive faculties (as in Intelligence Amplification, or IA), as well as creating categorically new functional modalities – is much easier from within virtual embodiment than it would be in physicality. In virtual embodiment, all such modifications become a methodological, rather than technological, problem. To enact such changes in a physically-embodied nervous system would require designing a system to implement those changes, and actually implementing them according to plan. To enact such changes in a virtually-embodied nervous system requires only a re-organization or re-writing of information. Moreover, in virtual embodiment any changes could be made and reversed, whereas in physical embodiment reversing such changes would require, again, designing a method and system for implementing such “reversal-changes” in physicality (thereby necessitating a whole host of other technologies and methodologies). And if those changes made further unexpected changes that we couldn’t easily reverse, we might create an infinite regress, wherein changes made to reverse a given modification in turn create more changes that in turn need to be reversed, ad infinitum.

Thus self-modification (and especially recursive self-modification), towards the purpose of intelligence amplification into Ultraintelligence [7], is easier (i.e. it necessitates a smaller technological and methodological infrastructure – that is, the required host of methods and technologies – and thus costs less as well) in virtual embodiment than in physical embodiment.

These recursive modifications not only further maximize the upload’s ability to think of ways to accelerate the coming of an intelligence explosion, but also maximize his ability to further self-modify towards that very objective (thus creating the positive feedback loop critical to I. J. Good’s intelligence-explosion hypothesis) – in other words, they maximize his ability to maximize his general ability at anything.

But to what extent is the ability to self-modify hampered by the critical feature of Blind Replication mentioned above – namely, the inability to modify and optimize various performance measures by virtue of the fact that we can’t predictively model the operational dynamics of the system-in-question? Well, an upload could copy himself, enact any modifications, and see the results – or indeed, make a copy to perform this change-and-check procedure. If the inability to predictively model a system made through the “Blind Replication” method does indeed problematize the upload’s ability to self-modify, it would still be much easier to work towards being able to predictively model it, via this iterative change-and-check method, due to both the subjective-perception-of-time speedup and the ability to make copies of himself.
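The iterative change-and-check procedure can be sketched as a simple loop: copy the emulation, apply a candidate modification, benchmark the copy, and keep it only if the score improves. Everything in this sketch – the dict-based “emulation”, the benchmark, and the modifications – is a hypothetical stand-in, not a claim about how a real emulation would be represented:

```python
import copy

# A greedy sketch of the change-and-check procedure: modifications are
# trialled on a copy and adopted only if they improve a benchmark score.
def change_and_check(emulation, modifications, benchmark):
    best_score = benchmark(emulation)
    for modify in modifications:
        trial = copy.deepcopy(emulation)   # never risk the running original
        modify(trial)
        score = benchmark(trial)
        if score > best_score:             # keep only strict improvements
            emulation, best_score = trial, score
    return emulation, best_score

# Toy stand-in: the "emulation" is a dict of tunable parameters, and the
# benchmark rewards a higher ratio of synaptic density to noise.
emu = {"synaptic_density": 1.0, "noise": 0.5}
mods = [lambda e: e.update(noise=e["noise"] * 0.9) for _ in range(10)]
best, score = change_and_check(emu, mods,
                               lambda e: e["synaptic_density"] / e["noise"])
print(best, round(score, 2))
```

The deep copy is the point of the procedure: because Blind Replication rules out predictive modeling, each modification must be tested empirically on an expendable copy rather than derived from a model of the system.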

It is worth noting that it might be possible to predictively model (and thus make reliable or stable changes to) the operation of neurons, without being able to model how this scales up to the operational dynamics of the higher-level neural regions. Thus modifying, increasing or optimizing existing functional modalities (e.g. increasing synaptic density in neurons, or increasing the range of usable neurotransmitters – thus increasing the potential information density in a given signal or synaptic transmission) may be significantly easier than creating categorically new functional modalities.

Increasing the Imminence of an Intelligence Explosion:

So what ways could the upload use his/her new advantages and abilities to actually accelerate the coming of an intelligence explosion? He could apply his abilities to self-modification, or to the creation of a Seed-AI (or more technically a recursively self-modifying AI).

He could also accelerate its imminence vicariously by working on accelerating the foundational technologies and methodologies (or in other words the technological and methodological infrastructure of an intelligence explosion) that largely determine its imminence. He could apply his new abilities and advantages to designing better computational paradigms, new methodologies within existing paradigms (e.g. non-Von-Neumann architectures still within the paradigm of electrical computation), or to differential technological development in “real-time” physicality towards such aims – e.g. finding an innovative means of allocating assets and resources (i.e. capital) to R&D for new computational paradigms, or optimizing current computational paradigms.

Thus there are numerous methods of indirectly increasing the imminence (or the likelihood of imminence within a certain time-range, which is a measure with less ambiguity) of a coming intelligence explosion – and many new ones no doubt that will be realized only once such an upload acquires such advantages and abilities.

Intimations of Implications:

So… Is this good news or bad news? Like much else in this increasingly future-dominated age, the consequences of this scenario remain morally ambiguous. It could be both bad and good news. But the answer to this question is independent of the premises – that is, two can agree on the viability of the premises and reasoning of the scenario, while drawing opposite conclusions in terms of whether it is good or bad news.

People who subscribe to the “Friendly AI” camp of AI-related existential risk will be at once hopeful and dismayed. While this scenario might increase their ability to create their AGI (or, more technically, their Coherent-Extrapolated-Volition engine [8]), thus decreasing the chances of an “unfriendly” AI being created in the interim, they will also be dismayed by the fact that it may involve (though not necessitate) the creation of a recursively self-modifying intelligence – in this case an upload – prior to the creation of their own AGI, which is the very problem they are trying to mitigate in the first place.

Those who, like me, see a distributed intelligence explosion – in which all intelligences are allowed to recursively self-modify at the same rate, thus preserving “power” equality, or at least mitigating “power” disparity (where power is defined as the capacity to effect change in the world or society), and in which any intelligence increasing its capability at a faster rate than all others is disallowed – as a better method of mitigating the existential risk entailed by an intelligence explosion will also be dismayed. This scenario would allow one single person essentially to have the power to determine the fate of humanity, due to his massively increased “capability” or “power” – which is the very feature (capability disparity/inequality) that the “distributed intelligence explosion” camp of AI-related existential risk seeks to minimize.

On the other hand, those who see great potential in an intelligence explosion to help mitigate existing problems afflicting humanity – e.g. death, disease, societal instability, etc. – will be hopeful because the scenario could decrease the time it takes to implement an intelligence explosion.

I for one think it highly likely that the advantages proffered by accelerating the coming of an intelligence explosion fail to supersede the disadvantages incurred by the increase in existential risk it would entail. That is, I think that the increase in existential risk brought about by putting so much “power” or “capability-to-effect-change” in the (hands?) of one intelligence outweighs the decrease in existential risk brought about by the accelerated creation of an Existential-Risk-Mitigating A(G)I.

Conclusion:

Thus, the scenario presented above yields some interesting and counter-intuitive conclusions:

  1. How imminent an intelligence explosion is, or how likely it is to occur within a given time-frame, may be more determined by basic processing power than by computational price performance, which is a measure of basic processing power per unit of cost. This is because as soon as we have enough processing power to emulate a human nervous system, provided we have sufficient software to emulate the lower level neural components giving rise to the higher-level human mind, then the increase in the rate of thought and subjective perception of time made available to that emulation could very well allow it to design and implement an AGI before computational price performance increases by a large enough factor to make the processing power necessary for that AGI’s implementation available for a widely-affordable cost. This conclusion is independent of any specific estimates of how long the successful computational emulation of a human nervous system will take to achieve. It relies solely on the premise that the successful computational emulation of the human mind can be achieved faster than the successful implementation of an AGI whose design is not based upon the cognitive architecture of the human nervous system. I have outlined various reasons why we might expect this to be the case. This would be true even if uploading could only be achieved faster than AGI (given an equal amount of funding or “effort”) by a seemingly-negligible amount of time, like one week, due to the massive increase in speed of thought and the rate of subjective perception of time that would then be available to such an upload.
  2. The creation of an upload may be relatively independent of software performance/capability (which is not to say that we don’t need any software, because we do, but rather that we don’t need significant increases in software performance or improvements in methodological implementation – i.e. how we actually design a mind, rather than the substrate it is instantiated by – which we do need in order to implement an AGI and which we would need for WBE, were the system we seek to emulate not already in existence) and may in fact be largely determined by processing power or computational performance/capability alone, whereas AGI is dependent on increases in both computational performance and software performance or fundamental progress in methodological implementation.
    • If this second conclusion is true, an upload may be possible quite soon, given that we have already passed the basic processing-power estimates given by Kurzweil, Moravec and Storrs-Hall. This holds provided we can emulate the low-level neural regions of the brain with high predictive accuracy, and provided the claim proves true that instantiating such low-level components will vicariously instantiate the emergent human mind, without our needing to understand how those components functionally converge to do so. AGI, by contrast, may still have to wait for fundamental improvements in methodological implementation or "software performance".
    • Thus it may be easier to create an AGI by first creating an upload to accelerate that AGI's development than to work on the development of an AGI directly. Upload-plus-AGI may actually be easier to implement than AGI alone!
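The arithmetic behind the first conclusion can be made concrete. The numbers below are illustrative assumptions only (the essay gives no specific speedup figure): assume an emulation runs 1,000 times faster than biological real time, and that uploading beats de-novo AGI by a single calendar week.

```python
# Back-of-the-envelope sketch of the "one-week head start" argument.
# Both numbers are hypothetical assumptions, not estimates from the essay.
speedup = 1000            # assumed emulation speed relative to biological real time
head_start_weeks = 1      # assumed lead of uploading over de-novo AGI

subjective_weeks = head_start_weeks * speedup
subjective_years = subjective_weeks / 52

# Even a one-week calendar lead gives the upload roughly two decades of
# subjective research time in which to design an AGI.
print(round(subjective_years, 1))
```

Under these assumptions, a seemingly negligible one-week lead translates into roughly nineteen subjective years of design work, which is the sense in which the head start, not the price-performance curve, does the heavy lifting.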


References:

[1] Kurzweil, R. (2005). The Singularity Is Near. Penguin Books.

[2] Moravec, H. (1997). When will computer hardware match the human brain? Journal of Evolution and Technology, 1(1). Available at: http://www.jetpress.org/volume1/moravec.htm [Accessed 1 March 2013].

[3] Hall, J. (2006). Runaway Artificial Intelligence? Available at: http://www.kurzweilai.net/runaway-artificial-intelligence [Accessed 1 March 2013].

[4] Ford, A. (2011). Yudkowsky vs Hanson on the Intelligence Explosion — Jane Street Debate 2011. [Online video], August 10, 2011. Available at: http://www.youtube.com/watch?v=m_R5Z4_khNw [Accessed 1 March 2013].

[5] Drexler, K.E. (1989). Molecular Manipulation and Molecular Computation. In: NanoCon Northwest Regional Nanotechnology Conference, Seattle, Washington, February 14–17. Available at: http://www.halcyon.com/nanojbl/NanoConProc/nanocon2.html [Accessed 1 March 2013].

[6] Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap. Technical Report #2008-3. Available at: http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf [Accessed 1 March 2013].

[7] Good, I.J. (1965). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers.

[8] Yudkowsky, E. (2004). Coherent Extrapolated Volition. The Singularity Institute.

[Image: Dirrogate Fundawear, from "Memories with Maya"]

Emotions and Longevity:

If the picture header above influenced you to click through to this article, then it establishes at least part of my hypothesis: visual stimuli that trigger our primal urges supersede our other senses, even overriding intellect. By that I mean that, irrespective of IQ level, the visual alone, and not the title of the essay, will have prompted the click-through. It is a classic advertising tactic: sex sells.

Yet could there be a clue in this behavior worth studying further in our quest for longevity? Before Transhumanist life-extension technologies such as nano-tech and bio-tech go mainstream… we need to keep our un-amped bodies in a state of constant excitement, using visual triggers that generate positive emotions, and thereby, hopefully, keep ourselves around long enough to take advantage of these bio-hacks when they become available.

[Image: Dirrogate emotion and Transhumanism graphic]

Emotions on Demand — The “TiVo-ing” of feelings:

From the graphic above, it is easy to extrapolate that 'positive' emotions can contribute significantly to longevity. When we go on a vacation, we experience the world in a relaxed frame of mind and encode these experiences, even if subconsciously, in our brains (minds?). Days or even years later, we can call on these experiences on demand to bring us comfort.

Granted, much like analog recordings, these stored copies of positive emotions deteriorate over time. Just as we can today digitize images and sounds, making pristine, everlasting copies… can we digitize emotions for recall, to experience them on demand?

How would we go about doing it and what purpose does it serve?

[Image: Durex Fundawear]

Digitizing Touch: Your Dirrogate’s unique Emotional Signature:

Can we digitize touch, a crucial building block in the creation of emotions? For an answer, we need to look to the (to some, questionable) technology behind teledildonics.

While the tech to experience haptic feedback has been around for a while, it has mostly been confined to Virtual Reality simulations and training purposes. Crude haptic force-feedback gaming controllers are available on the market, but advances in actuators and nano-scale miniaturization are about to change that, even going as far as to give us tactile imaging capability: "smart skin".

Recently, Durex announced "Fundawear". Its purpose? To experience the "touch" of your partner in a fun, light-hearted way. Yet what if a Fundawear session could be recorded and played back later? The unique way your partner touches, forever digitized for playback when desired… allowing you to experience the emotions of joy and happiness at will?

Fundawear can be thought of as a beta v1.0 of something akin to smart skin in reverse, which could eventually allow a complete "feel-stream" to be digitized and played back on demand.
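As a thought experiment, a "feel-stream" is just a sequence of timestamped actuator intensities. The sketch below is purely hypothetical (the names `TouchSample`, `record` and `replay` are illustrative inventions, not any real Fundawear API); it shows how such a stream could be stored once and replayed later, even at a different speed.

```python
from dataclasses import dataclass
from typing import List

# A hypothetical "feel-stream" sample: one actuator event in a session.
@dataclass
class TouchSample:
    t: float          # seconds since the start of the session
    actuator: int     # which actuator in the garment fired
    intensity: float  # 0.0 (off) to 1.0 (maximum)

def record(samples: List[TouchSample]) -> List[TouchSample]:
    # "Digitizing" here is simply storing the samples in time order.
    return sorted(samples, key=lambda s: s.t)

def replay(stream: List[TouchSample], speed: float = 1.0) -> List[TouchSample]:
    # Playback at a chosen speed: rescale timestamps, keep intensities intact.
    return [TouchSample(s.t / speed, s.actuator, s.intensity) for s in stream]

# Record two touches out of order, then replay the session at double speed.
stream = record([TouchSample(0.5, 1, 0.8), TouchSample(0.0, 0, 0.3)])
fast = replay(stream, speed=2.0)
```

The point of the sketch is only that once touch is represented as data, the familiar operations on recordings (storage, copying, time-shifting) apply to it exactly as they do to digitized sound or video.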

Currently, we are already able to digitize stimuli for two of our primary senses:

  • Sight — via a video camera.
  • Sound — via microphones.

So how do we go about digitizing and re-creating the sense of Touch?

Solutions such as the one from NuiCapture shown in the video above, in combination with off-the-shelf game hardware such as the Kinect, can digitize a whole-body "performance", also known as performance capture.

Dirrogates and 3D Printing a Person:

In the near future, if we have blueprints to 3D print a person, complete with "smart skin" and ready for re-animation, such a 3D-printed surrogate could reciprocate our touch.

It would be an exercise in imagination to envision 3D printing your partner when they couldn't be with you. It could also raise moral and ethical issues, such as 'adultery', if an unauthorized 3D-printed copy of a person were produced and their "signature" performance files were pirated.

But with every evil there is also good: 3D printers can print guns, or, as seen in the video above, a prosthetic hand, allowing a child to experience life the way other children do. That is the ethos of Transhumanism.

[Image: 3D TV family conference]

Loneliness can kill you:

Well, maybe not exactly kill you, but it can negatively impact your health, says The World of Psychology. That would be counterproductive in our quest for longevity.

A few years ago, companies such as Accenture introduced family collaboration projects. I recommend clicking on the link to read the article, as copyright restrictions prevent including it in this essay. In essence, it allows older relatives to derive emotional comfort from seeing and interacting with their families living miles away.

At a very basic level, we are already Transhuman. No stigma involved… no religious boundaries crossed. This ethical use of technology can bring comfort to an aging section of society, bettering their condition.

In a relationship, the loss of a loved one can be devastating to the surviving partner, even more so if the couple had grown old together and shared their good and bad times. Experiencing and re-living memories that transcend photographs and videos could contribute to generating positive emotions, and thus longevity, in the person coping with his or her loss.

While 3D printing and re-animating a person is still a few years away, there is a stop-gap technology: Augmented Reality. With AR visors, we can see and interact with a "Dirrogate" (Digital Surrogate) of another person as though they were in the same room with us. A person's Dirrogate can be operated in real time by someone living thousands of miles away… or a digitized touch stream can be called on long after the human operator is no more.

In the story "Memories with Maya", this context, and its repercussions for our evolution into a Transhuman species, is explored in more detail.

The purpose of this essay is to seed ideas only, and is not to be taken as expert advice.


There is a real power in the act of physically moving. In so doing, each and every morning I can escape the cacophonous curse of the ubiquitous ESPN in the gym locker room. I toss my bag in my locker and immediately escape to the pure, perfect, custom designed peace of my iPod’s audio world. I also well remember the glorious day I moved away from the hopelessness of my roommate’s awful sub-human, sub-slum stench and into my own private apartment. The universe changed miraculously overnight. I think you can get my drift. The simple act of moving itself can be powerfully transformational. Sometimes, there is not enough bleach and not enough distance between the walls to have the desired effect. Physically moving is quite often the only answer.

As we consider transhumanist societies, that transformational power is magnified many times over. My team has been engaged in developing the first permanent human undersea settlement over the past few decades. In this process we have had the distinct advantage of planning profoundly transhumanist advances precisely because of the advantageous context of relative community isolation. Further, we have the benefit of deriving change as a community necessity, as a psychological and cultural imperative for this degree of advanced cultural evolution. It is a powerfully driven kind of societal punctuated equilibrium that can be realized in few other ways.

In moving into the oceans, the submarine environment itself immediately establishes the boundary between the new, evolving culture and the old. While the effect and actual meaning of this boundary is almost always overrated, it is nonetheless a real boundary layer that allows the new culture to flourish without interference or contamination from the old. Trying to accomplish transhumanist goals while culturally embedded is far more difficult, and far less persuasive to those who must undergo dramatic change, if the transformation is actually to take hold and survive generationally. But in a new, rather isolated environment, the pressure to evolve and integrate permanent change is not only easier to sustain; it is expected as part of the reasonable process of establishment.

In one of our most powerful spin-offs back to the land-dwellers (LDs), our culture will begin on day one as a ‘waste-free culture’. It is an imperative and therefore a technological design feature. It is a value system. It is codified. It is a defining element of our new culture. It is also radically transhumanist. In our society, we teach this to one another and to our children, as well as every subsequent generation. In our undersea culture we have a process called ‘resource recovery’, since every product of every process is a resource to be utilized in the next round of community life cycle processing. Hence even the vilest sewage is just a part of the carbon cycle for the next round of our life support system engineering. Nothing is to be ‘wasted’. Nothing is to be ‘cast off’. We cannot afford ‘waste’ of any kind, hence waste will cease to exist as a concept. Everything is a resource. The life of the next cycle depends on the successful re-integration of each preceding cycle. The future life and wellbeing of the colony directly depends on the successful implementation of the conservation of resources and in turn the preservation of the natural health of its immediate environment in just this fashion.

Such advancement would be most difficult to engineer in a land-dweller community. The first problem would be simple re-education and the most elementary expectations. The next hurdle would be the re-engineering of every process that the LDs now identify as ‘waste processing’, ‘waste storage’, ‘waste distribution’ etc. Sadly, much of the LD’s unprocessed and unstabilized product is dumped into our ocean environment! But in the simple act of moving the same people to a new social structure, the impossible becomes surprisingly straightforward and even easy to implement. The difference and the power were always implicit in the move itself. The transhumanist ideal seems much better framed in this context when one considers that this is only one of countless examples of building new societies that are cleanly separated from the old.

This is certain to engender arguments to the contrary. For how often is the rare opportunity available to move into a new cultural paradigm cleanly distinct from its predecessor? The transhumanist concept must therefore be able to rely on in situ prototypes, which must themselves succeed if the culture is to evolve successfully. I have no argument with this, except to emphasize the intrinsic power of clean cultural separation, as described in this example.

Obviously the ocean settlement is only one prototype. Space settlements and surface-based seasteading are other examples to consider. The fact is clear: transhumanist cultures will always, and quite easily, develop in the new isolated human communities that are about to flourish in the most unexpected of places.

_________________________________________
Dennis Chamberland is the Expeditions Leader for the Atlantica Expeditions, in which others may participate. Dennis is also a writer, the author of Undersea Colonies and other books in which many of these concepts are discussed in greater detail.