
It is worth noting that the technical possibility of sending an interstellar Ark appeared in the 1960s, based on Ulam's concept of the "blast-ship." A blast-ship uses the energy of nuclear explosions to move forward. Detailed calculations were carried out under Project Orion. http://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion) In 1968 Dyson published the article "Interstellar Transport," which gives upper and lower bounds for such projects. In the conservative estimate (i.e., assuming no new technical achievements), it would cost 1 U.S. GDP (600 billion U.S. dollars at the time of writing) to launch a spaceship with a mass of 40 million tonnes (of which 5 million tonnes is payload), and its flight time to Alpha Centauri would be 1,200 years. In the more advanced version, the price is 0.1 U.S. GDP, the flight time is 120 years, and the starting weight is 150,000 tonnes (of which 50,000 tonnes is payload). In principle, using a two-stage scheme, more advanced thermonuclear bombs, and reflectors, the flight time to the nearest star could be reduced to 40 years.
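As a quick sanity check on Dyson's figures, the flight times imply modest average cruise speeds. The sketch below assumes a distance of about 4.37 light-years to Alpha Centauri, a number not stated in the text:

```python
# Sanity check of Dyson's figures: the implied average cruise speed for each
# variant, assuming ~4.37 light-years to Alpha Centauri (this distance is an
# added assumption, not a figure from the text).

DIST_LY = 4.37  # light-years to Alpha Centauri (assumed)

def implied_speed(flight_time_years: float) -> float:
    """Average speed as a fraction of the speed of light."""
    return DIST_LY / flight_time_years

for label, years in [("conservative", 1200), ("advanced", 120), ("two-stage", 40)]:
    print(f"{label}: {implied_speed(years):.1%} of c")
# conservative: 0.4% of c, advanced: 3.6% of c, two-stage: 10.9% of c
```

Even the fastest variant stays around a tenth of light speed, which is why flight times are measured in generations.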
Of course, the crew of such a spaceship is doomed to extinction if they do not find a habitable planet fit for humans in the target star system. Another option is that they colonize an uninhabited planet. In 1980, R. Freitas proposed lunar exploration using a self-replicating factory with an initial mass of 100 tons, though controlling it would require artificial intelligence ("Advanced Automation for Space Missions," http://www.islandone.org/MMSG/aasm/). Artificial intelligence does not yet exist, but the management of such a factory could be carried out by people. The main question is how much technology and equipment would be enough to land on a moonlike uninhabited planet so that people could build a completely self-sustaining and growing civilization on it. It amounts to creating something like an inhabited von Neumann probe. A modern self-sustaining state includes at least a few million people (like Israel), with hundreds of tons of equipment per person, mainly in the form of houses and roads; the weight of machinery is much smaller. This gives us an upper bound for a self-replicating human colony of about 1 billion tons. The lower estimate is about 100 people, each of whom accounts for approximately 100 tons (mainly food and shelter), i.e., 10,000 tons of mass. A realistic assessment should be somewhere in between, probably in the tens of millions of tons. All this is under the assumption that no miraculous nanotechnology has yet been developed.
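The mass bounds in this paragraph can be reproduced arithmetically. In the sketch below, the specific values of 5 million people and 200 tons per person stand in for "a few million" and "hundreds of tons," and the geometric mean is just one way to pick a middle estimate:

```python
import math

# Reproducing the colony-mass bounds from the text. The specific values
# 5 million people and 200 t/person stand in for "a few million" and
# "hundreds of tons"; the geometric mean is one way to pick a midpoint.

upper_people = 5_000_000           # "a few million people (like Israel)"
upper_tons_per_person = 200        # "hundreds of tons of equipment" each
upper_bound_tons = upper_people * upper_tons_per_person   # 1e9 t, the upper bound

lower_people = 100                 # minimal crew
lower_tons_per_person = 100        # mainly food and shelter
lower_bound_tons = lower_people * lower_tons_per_person   # 10,000 t, the lower bound

realistic_tons = math.sqrt(upper_bound_tons * lower_bound_tons)
print(f"upper {upper_bound_tons:.0e} t, lower {lower_bound_tons:.0e} t, "
      f"middle {realistic_tons:.0e} t")
```

The geometric mean comes out at a few million tons; the text's "tens of millions" sits between that midpoint and the upper bound.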
The advantage of a spaceship as an Ark is that it is a non-specific response to a host of different threats with indeterminate probabilities. If you face a specific threat (an asteroid, an epidemic), then it is better to spend the money on removing that threat.
Thus, if such a decision had been made in the 1960s, such a ship could already be on its way.
But if we set aside the technical side of the issue, there are several trade-offs among strategies for creating such a spaceship.
1. The sooner such a project is started, the less technically advanced it will be, the lower its chances of success, and the higher its cost. But the later it is initiated, the greater the chance that it will not be completed before a global catastrophe.
2. The later the project starts, the greater the chance that it will carry the "diseases" of its mother civilization with it (e.g., the ability to create dangerous viruses).
3. The project to create such a spaceship could itself lead to the development of technologies that threaten civilization. The blast-ship uses hundreds of thousands of hydrogen bombs as fuel; therefore, it could either be used as a weapon, or another party might fear it and respond. In addition, the spaceship could turn around and strike the Earth like a star-hammer, or there may be fear that it will. During construction, man-made accidents with enormous consequences could occur, up to the detonation of all the bombs on board. If the project is implemented by one country in time of war, other countries could try to shoot down the spaceship when it launches.
4. The spaceship is a means of protection against a Doomsday machine, as a strategic response in the style of Herman Kahn. Therefore, the creators of such a Doomsday machine could perceive the Ark as a threat to their power.
5. Should we implement one more expensive project, or several cheaper ones?
6. Would it be sufficient to limit colonization to the Moon, Mars, Jupiter's moons, or objects in the Kuiper belt? At the least, these could serve as fallback positions at which the technology of autonomous colonies can be tested.
7. The sooner the spaceship launches, the less we know about exoplanets. How far and how fast should the Ark fly in order to be relatively safe?
8. Could the spaceship hide itself so that Earth does not know where it is, and should it do so? Should the spaceship communicate with Earth at all? Or is there a risk of attack by a hostile AI in that case?
9. Would not the creation of such projects exacerbate the arms race or lead to premature depletion of resources and other undesirable outcomes? The creation of pure hydrogen bombs would simplify building such a spaceship, or at least reduce its cost, but at the same time it would increase global risks, because nuclear non-proliferation would suffer complete failure.
10. Will Earth in the future compete with its independent colonies, or will this lead to star wars?
11. If the ship departs slowly enough, would it be possible to destroy it from Earth with a self-propelled missile or a radiation beam?
12. Is this mission a real chance for the survival of mankind? Those who fly away are likely to perish, because the chance of the mission's success is no more than 10 percent. Those remaining on Earth may start to behave more riskily, reasoning: "Well, since we have protection against global risks, we can now start risky experiments." As a result of the project, the total probability of survival decreases.
13. What are the chances that the Ark's computer network will download a virus if it communicates with Earth? And if it does not communicate, that too will reduce the chances of success. Competition for nearby stars is possible, and faster ships would win it. After all, there are not many stars within a distance of about 5 light-years (Alpha Centauri and Barnard's Star), and competition for them could begin. The existence of dark, lone planets or large asteroids without host stars is also possible; their density in the surrounding space should be 10 times greater than the density of stars, but finding them is extremely difficult. It would also be a problem if the nearest stars had no planets or moons. Some stars, including Barnard's, are prone to extreme stellar flares, which could kill the expedition.
14. The spaceship will not protect people from a hostile AI that finds a way to catch up with it. Also, in case of war, starships would be prestigious yet easily vulnerable targets: an unmanned rocket will always be faster than a spaceship. If Arks are sent to several nearby stars, this does not ensure their secrecy, since the destinations will be known in advance. A phase transition of the vacuum, an explosion of the Sun or Jupiter, or another extreme event could also destroy the spaceship. See, e.g., A. Bolonkin, "Artificial Explosion of Sun. AB-Criterion for Solar Detonation," http://www.scribd.com/doc/24541542/Artificial-Explosion-of-Sun-AB-Criterion-for-Solar-Detonation
15. Moreover, the spaceship is too expensive a protection against many other risks that do not require such distant removal. People could hide from almost any pandemic on well-isolated islands in the ocean. People could hide on the Moon from gray goo, an asteroid collision, a supervolcano, or irreversible global warming. The Ark-spaceship will carry with it the problems of genetic degradation, the propensity for violence and self-destruction, and the problems associated with limited human outlook and cognitive biases. The spaceship would only aggravate the problems of resource depletion, wars, and the arms race. Thus, the set of global risks against which the spaceship is the best protection is quite narrow.
16. And most importantly: does it make sense to begin this project now? In any case, there is no time to finish it before new risks become real and new ways of creating spaceships using nanotech appear.
Of course, it is easy to envision a nano- and AI-based Ark: it would be as small as a grain of sand, would carry only one human egg or even just DNA information, and could self-replicate. The main problem is that it could be created only AFTER the most dangerous period of human existence, which is the period just before the Singularity.

Introduction
At a fundamental level, real wealth is the ability to fulfill human needs and desires. These ephemeral motivators are responsible for the creation of money, bank ledgers, and financial instruments that drive the world—caveat the fact that the monetary system can’t buy us love (and a few other necessities). Technologies have always provided us with tools that enable us to fulfill more needs and desires for more people with less effort. The exponential nanomanufacturing capabilities of Productive Nanosystems will simply enable us to do it better. Much better.

Productive Nanosystems
The National Nanotechnology Initiative defines nanotechnology as technologies that control matter at dimensions between one and a hundred nanometers, where unique phenomena enable novel applications. For particles and structures, reducing dimensions to the nanoscale primarily affects surface area to volume ratios and surface energies. For active structures and devices, the significant design parameters become exciton distances, quantum effects, and photon interactions. Connecting many different nanodevices into complex systems will multiply their power, leading some experts to predict that a particular kind of nanosystem—Productive Nanosystems, which produce atomically precise products—will dramatically change the world.

Productive Nanosystems are programmable mechanoelectrochemical systems that are expected to rearrange bulk quantities of atoms with atomic precision under programmatic control. There are currently four approaches expected to lead to Productive Nanosystems: DNA Origami[1], Bis-Peptide Synthesis[2], Patterned Atomic Layer Epitaxy[3], and Diamondoid Mechanosynthesis[4]. The first two are biomimetic bottom-up approaches that struggle to achieve long-range order and to increase complexity despite using chaotic thermodynamic processes. The latter two are scanning-probe-based top-down approaches that struggle to increase productivity to a few hundred atoms per hour while reducing error rates.[5]

For the bottom-up approaches, the tipping point will be reached when researchers build the first nanosystem complex enough to do error correction. For the top-down approaches that can do error correction fairly easily, the tipping point will be reached when subsequent generations of tip arrays no longer need to be redesigned for speed and size improvements while using control algorithms that scale well (i.e. they only need generational time, synthesized inputs, and expansion room). When these milestones are reached, nanosystems will grow exponentially—unnoticeably for a few weeks, but suddenly they will become overwhelmingly powerful. There are many significant applications foreseen for mature Productive Nanosystems, ranging from aerospace and transportation to medicine and manufacturing—but what may affect us the hardest may be those applications that we can’t foresee.
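The "unnoticeable for a few weeks, then overwhelming" character of exponential replication can be made concrete. The sketch below assumes a 1-gram seed and a 12-hour doubling time; both numbers are arbitrary illustrations, not figures from the text:

```python
# Toy model of exponential nanosystem growth: a 1-gram seed with an assumed
# 12-hour doubling time (both numbers are arbitrary, purely for illustration).

def mass_after(days: float, doubling_hours: float = 12.0, seed_kg: float = 1e-3) -> float:
    return seed_kg * 2 ** (days * 24 / doubling_hours)

for day in (7, 14, 21, 28):
    print(f"day {day}: {mass_after(day):,.0f} kg")
# day 7: ~16 kg; day 14: ~270 tonnes; day 21: ~4.4 million tonnes; day 28: ~7e10 tonnes
```

A week in, the system still fits on a lab bench; a few weeks later, the numbers stop meaning anything on a human scale.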

Thus far, no scientific reason has been discovered that would prevent any of the four approaches from leading to Productive Nanosystems, much less all of them. So when an early desktop nanofactory prints out the next generation of Intel's processor (without an $8 billion microphotolithography fab plant), or a sailboat goes out for a weekend cruise and collects a few kilograms of gold or plutonium from seawater, people will sit up and take notice that the world has changed. Unfortunately, by then it will be a bit late: they will be like Neanderthals staring at a jet fighter that just thundered by overhead and is already halfway to the horizon.

Combined with sufficient medical knowledge of how the human body should operate at the nanoscale, Productive Nanosystems may also be able to cure all known diseases, and perhaps even reverse the seven mechanisms of aging. For example, replacing red blood cells with microscopic artificial red blood cells (consisting of pressurized tanks and nanocomponents) will enable people to hold their breath for four hours.[6] Such simple nanobots (with less complexity than a microwave oven) may save the lives of many patients with blood and heart disorders. Other nanostructures, such as artificial kidneys with biocompatible nanomembranes, may prevent end-stage renal failure. One important caveat, however, is that Productive Nanosystems can only move atoms around; they are useless when we don't know where the atoms are supposed to go. Discovering the optimal positions of atoms for a particular application is new science, and inherently unpredictable.

In contrast to inventing new science, connecting nanodevices together to form a Productive Nanosystem is an engineering problem. If done correctly, it will make possible nanofactory appliances that can “print” anything (caveat the flexibility of the output envelope, the range and limits of the input molecules, the “printing” process, and the software).[7] These developments should increase our average standard of living to levels that would make Bill Gates look like a pauper, while reducing our carbon footprint to negative numbers, and replacing the energy and transportation infrastructures of the world.
Maybe.

After all, we currently have a technologically-enhanced standard of living that kings and pharaohs of old would envy, but we certainly haven’t reached utopia yet. On the other hand, atomically precise products made by Productive Nanosystems will be able to reduce economic dependency to a square meter of dirt and the sunshine that lands on it, while simultaneously lowering the price to orbit to $5/lb. Those kinds of technological capabilities might buy a significant amount of economic and political freedom.

Economics
The collisions between unstoppable juggernauts and immovable obstacles are always fascinating—we just cannot tear our eyes away from the immense conflict, especially if we have a glimmer of the immense consequences it will have for us. So it will be when Productive Nanosystems emerge from the global financial meltdown. To predict what will happen in the next decade or so, we must understand the essential nature of wealth, and we must understand the capabilities of productive nanosystems. Plus we must understand the consequences of their confluence. This is a tall order. Like any new technology, the development of Productive Nanosystems will depend on economics and politics, primarily the Rule of Law and enforceable contracts. But then the formidable power of Productive Nanosystems to do more with less will significantly affect some of the rules that govern economics and politics.

In the past few months, many people have panicked over plummeting retirement accounts, tumbling real estate values, and the loss of jobs by their coworkers (if not themselves). The government’s subsequent response has been equally shocking, as government spending has skyrocketed with brain-numbing strings of zeros being added to the national debt. Historically in both the U.S. and abroad, an expansion of the money supply in excess of the production of real goods and services has invariably produced inflation.

To make some sense of what is happening, and of how we might get out of this mess, it might be useful to re-examine the concept of wealth. Karl Marx’s “labor theory of value” identified human labor as the only source of wealth, but there are at least three major errors with this view. First, valuable material resources are spread unequally over this planet (which is why mining rights are so important). Second, tools can multiply the value of a person’s labor by many magnitudes (and since tools are generated by human labor and other tools, the direction and specific accomplishments of human labor become important). Third, political and social systems that incentivize different types of human behavior (and attitudes) will significantly increase or decrease the amount of real wealth available. Unfortunately, the tax rates of most political systems decrease the incentive to produce real wealth, and few of them provide an incentive to encourage the ultimate source of real wealth: the valuable ideas in the minds of inventors and innovators.

But what is that real wealth? Basically, it is the ability to fulfill human needs and desires. This means that (as subjective value theory claims), one person cannot know the needs and desires of another, and therefore all central planning schemes will fail. Statistics are fallible for a number of reasons, but mostly because reality is too complex: In the chaotic interplay of causal forces in the real world, the injection of a brilliant idea into a situation that is sensitive to initial conditions can change the world in very unpredictable ways. Also, central planning fails because human beings in power (i.e. politicians) are too susceptible to temptation (as in rent-seeking), and because the illogical passions that drive many human decisions cannot be encompassed by bureaucratic rules (or bureaucratic minds, for that matter).

By its very nature, real wealth requires government to uphold the inalienable rights of its citizens (including property rights), to provide for the common good by creating an orderly environment in which free citizens may prosper through their work, and to protect the weak from the strong. So government plays an important role in creating real wealth.

Wealth is often associated with money, but money is simply a counter: it replaced the barter of objects and services because it is an efficient marker that facilitates the exchange and storage of real wealth.[8]

Productive Nanosystems will only rearrange atoms, so they will not change what money and real wealth are. However, because Productive Nanosystems will provide a precise and powerful mechanism for rearranging atoms, they will be able to fulfill more human needs and desires than ever imaginable. But it still won’t be free.

Nanotechnologies and their applications will not be easily bartered, and atoms of different elements will still have relative scarcities (along with energy), so money will still be very useful. Unfortunately, it also means that deficit spending will still be inflationary. But will that be bad?

Early medieval Christian, Jewish, and Islamic societies all denounced usury as immoral, thereby preventing fractional reserve banking and inadvertently reducing the supply of capital available for business expansion. Some people are suspicious of the consequences and ethics of fractional reserve banking, based on an instinctive uneasiness that it seems like a Ponzi scheme, creating money out of nothing. But while a Ponzi scheme is always based on extravagant promises and fraudulent misrepresentation, fractional reserve banking can serve a beneficial role (i.e., generate real wealth) as long as the fraction that banks choose to lend is commensurate with the velocity of money, risk-weighted credit exposure, and the productivity of different forms of real wealth.[9] In today's non-agricultural, post-industrial society, the optimum reserve percentage has been calculated to be around 10%, and that has been the legal limit for some time. Unfortunately, greed being what it is, people have found loopholes in that law. In the United States this began occurring most notably in the 1990s, with the creation of Collateralized Debt Obligations and the 1999 repeal of the Glass-Steagall Act of 1933.[10]
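The 10% reserve figure implies an upper bound on money creation via the standard money-multiplier identity (1 divided by the reserve ratio). The loop below is a simplified sketch that ignores cash leakage and excess reserves:

```python
# Iterated lending under a fractional reserve requirement. With a 10% reserve,
# the money-multiplier identity bounds total money at deposit / reserve_ratio;
# this simplified sketch ignores cash leakage and excess reserves.

def total_money(initial_deposit: float, reserve_ratio: float, rounds: int) -> float:
    total, deposit = 0.0, initial_deposit
    for _ in range(rounds):
        total += deposit                 # each redeposit counts as money
        deposit *= (1 - reserve_ratio)   # banks lend out all but the reserve
    return total

print(total_money(100.0, 0.10, 50))   # ~994.8, approaching the limit
print(100.0 / 0.10)                   # closed-form limit: 1000.0
```

In other words, a $100 deposit under a 10% reserve can support at most $1,000 of money in circulation, which is why the choice of reserve fraction matters so much.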

In the olden days, monetary expansion occurred when the king called in all the coins, shaved them or diluted the alloy that made them up, and then re-issued them. This was the old-fashioned form of deficit spending. This trick became easier with the invention of paper money, and easier still as financial services moved into electronic bits. Besides being a theft from future lenders by present borrowers, deficit spending skews the value decisions of consumers and investors, causing them to spend and invest money differently than they would if they knew how much real money actually existed. Another problem develops when bankers start underwriting government bonds, giving them powerful incentives to pressure governments to maximize profit for themselves, not to benefit the country or its citizens (this is especially true when those in power build monopolies to reduce competition).

The expenses of running a bank, along with the expansion of the money supply via fractional reserve banking, mean that lenders must charge a reasonable interest rate to stay in business (at the same time, the exploitation of the poor by charging exorbitant interest is certainly unjust). The expansion of the money supply then maximizes the productivity of human labor as population grows and technology improves. This is why most economists think that the money supply should expand at the same rate as the growth in goods and services. Otherwise, deflation occurs as the exchange value of money increases to meet the expanded demand. At best, deflation only makes it more difficult for businesses to get loans for expansion; at worst, it signals the beginning of a deflationary spiral, in which falling prices give consumers an incentive to delay purchases until prices fall further, which in turn reduces overall economic activity, and so on.
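The claim that the money supply should grow with goods and services follows from the equation of exchange, M·V = P·Q, with the velocity of money V held constant. A minimal sketch (all numbers are arbitrary illustrations):

```python
# The equation of exchange M*V = P*Q behind the money-growth claim: with
# velocity V held fixed, the price level is P = M*V/Q. The numbers below
# are arbitrary, purely for illustration.

def price_level(money: float, velocity: float, output: float) -> float:
    return money * velocity / output

V = 2.0
print(price_level(1000, V, 2000))   # baseline: 1.0
print(price_level(1000, V, 2200))   # output +10%, money fixed: ~0.91 (deflation)
print(price_level(1100, V, 2200))   # money grows with output: 1.0 again
```

When output grows 10% while the money stock stands still, the price level falls by roughly the same fraction; growing the money supply in step keeps prices stable.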

Thus deficit spending skews the economic signal between production and consumption. This is why it is harmful, especially as deficit spending increases, and especially if the spending is politically charged. With respect to nanotechnology, the salient point is that deficit spending incentivizes short-run gains over long-term investments. The real problem is that this bias makes the investment necessary for nanotechnology-enabled productivity much more difficult to attain, even though such an investment could ameliorate the negative impact of current deficit spending.

Nanotechnology can do nothing to correct distorted economic signals. However, nanotechnology can increase productivity. And if it increases productivity as fast as the money supply grows, then we may not suffer hyperinflation—though admittedly, outracing politicians on a spending binge will be no mean trick. Whether it does or not depends on some sensitive initial conditions that may or may not trigger a psychological tipping point, at which many people realize that more claim-tickets (dollars) to wealth have been printed (or stored as zeros in some computer's memory) than can ever be redeemed. So they start panic-selling: exchanging paper or electronic money for anything with a more solid aspect of reality. The enhanced properties of primitive nanotech-enabled products will certainly have a dramatic effect on reality; this will be even more true with Productive Nanosystems, many of which may seem miraculous. Why worry about whether the numbers in your checking account are "real" as long as they cover next month's credit card bill for the medical nanobots you buy online and download today? The big question is *whether* the medical nanobots will really be available.

Unfortunately, even in the best case many individuals will suffer, because hyper-increased productivity may cause hyper-increased money flows. If the flow of money does hyper-accelerate (and even if it doesn't), the hyper-acceleration of productivity will undoubtedly cause more economic and social turbulence than most people can handle. This is a matter for concern, because many scenarios predict very significant amounts of turbulence as Productive Nanosystems reach a tipping point. By analogy, the recent financial meltdown is to the nanotech revolution what the dress rehearsal of a kindergarten play is to the Normandy invasion.

Why is the advent of Productive Nanosystems so significant, why is it bad (if it is), and what are we going to do about it?

First, it seems obvious that a rapid commercialization of Productive Nanosystems will cause turbulent economic fluctuations that hurt people who aren’t fast enough to adjust to them. But how do we know that Productive Nanosystems will cause massive fluctuations?

Briefly, it is because they are so powerful. For example, building nanoelectronic circuits on a desktop "printer" instead of in a fab plant will probably bankrupt the many companies needed to build the fab plant (no matter whether it costs a mere $2B, as it does today, or tops $50B, as expected a few Moore's-law generations from now). It is difficult to predict what would happen if the desktop "printer," or nanofactory, could print a copy of itself, but a continuation of "business as usual" would not be possible with such an invention.

Second, why is the quick development of Productive Nanosystems bad? Or is it?

Though many Americans today have adequate material comforts, we do not have some of the freedoms taken for granted by kings of old. Trinkets and baubles are not equivalent to freedom, and nanotech-enabled trinkets are trinkets nonetheless. On the other hand, atomically precise products made by productive nanosystems will be able to reduce economic dependency to a square meter of dirt and the sunshine that lands on it, and lower the price to orbit to $5/lb. Those kinds of abilities will buy a significant amount of economic and political freedom, especially for those with more than a square meter of dirt and sunshine. Just as the settlement of the New World had large effects on the Old, an expansion off-planet would have huge implications for those who stay behind. Given such possibilities and pushing Bill Joy’s overwrought fears of nanotechnology aside,[11] it seems that there is cause for concern, but there is also cause for hope.

Third, what are we going to do about it?

Part of the problem is that the future is not clear. Throwing more smart people at the problem might help reduce the amount of uncertainty, but only if the smart people understand why some events are more likely to occur. Then they need to explain to us and to policy makers the technical possibilities of Productive Nanosystems and their social consequences.

Second, we need to invest in Productive Nanosystems. Historically, we know that companies such as Google and Samsung, which increased their R&D spending after the dotcom bubble of 2001, came out much stronger than their competition. In 2003, China ranked third in the world in the number of nanotechnology patents, and in recent months Tsinghua University has often had more than twice as many nanotechnology patents pending as any U.S. university or organization. Earlier, Chinese researchers duplicated[12] Rothemund's DNA Origami experiment within months of the publication of his seminal article in Nature. Those who invest more money with more wisdom will do much better than those who do not invest, or who invest foolishly.

The other part of the problem is that we often don't have the intestinal fortitude to do what is right, even when we know what it is. As human beings, we are easily tempted. Neither increased intelligence nor mature Productive Nanosystems will ever help us get around this problem. About the only thing we can do is practice ethical and moral behavior now, so that we get into the habit before the consequences become enormous. Then again, judging from the recorded history, legends, and stories of ancient sources, the last six thousand years of practice have not done us much good.

Some of our current financial meltdown occurred because we were soft-hearted and soft-headed, encouraging the making of loans to people who couldn’t pay them back. Other financial problems occurred because of greed—the attempt to make money quickly without creating real wealth. Unfortunately, the enormous productivity promise of Productive Nanosystems may only encourage that type of risky gambling.

There is also the problem that poverty may not be only the lack of money. This means that in a Productive-Nanosystem-driven economy, poverty will not be the lack of real wealth, but something else. If that is true, then what is real poverty? Ignorance? Self-imposed unhappiness? The suffering of injustice? I don't know, but I suspect that just as obesity plagues the poor more than the rich, a hyper-abundant society will reveal social dysfunctions that seem counterintuitive to us today. Some characteristic dysfunctions, such as wealth producing sloth, are obvious. Others are not, and they are the ones that will trap numerous unsuspecting victims.

Eric Drexler has identified a few things that will be valuable in a hyper-abundant society: new scientific knowledge, and land area on Earth (the limit of which has been a cause of wars since humans first left Africa). Given the additional gifts of disease-free and ageless bodies, I would add a few more valuables, listed by increasing importance: the respect of a community, the trust of friends outside the increasingly byzantine labyrinth of law, the admiration of children (especially your own), the total lifelong commitment of a spouse, and the peace of knowing one’s unique destiny in this universe. We should all be as lucky.

Footnotes
1. Paul W. K. Rothemund, Folding DNA to create nanoscale shapes and patterns, Nature, Vol 440, 16 March 2006.
2. Christian Schafmeister, Molecular lego. Scientific American 2007;296(2):64–71.
3. John Randall, et al., Patterned atomic layer epitaxy — Patent 7326293
4. Robert A. Freitas Jr., Ralph C. Merkle, “A Minimal Toolset for Positional Diamond Mechanosynthesis,” J. Comput. Theor. Nanosci. 5(May 2008):760-861; http://www.MolecularAssembler.com/Papers/MinToolset.pdf
5. The Zyvex-led Atomically Precise Manufacturing Consortium has recently met its DARPA-funded Tip-Based Nanofabrication project's Phase I metrics by writing 100 dangling-bond wires, half of them 36.6 nm × 3.5 nm and half 24.5 nm × 3.5 nm, in 5.66 minutes. That is 1.5 million atoms per hour, but the error rate was ±6.4%, which is unacceptable for Productive Nanosystems (unless they implement error correction, which for Patterned Atomic Layer Epitaxy may or may not be easy because of the high mobility of hydrogen at the operating temperature of the process).
6. Tihamer Toth-Fejel, Respirocytes from Patterned Atomic Layer Epitaxy: The Most Conservative Pathway to the Simplest Medical Nanorobot. 2nd Unither Nanomedical and Telemedicine Technology Conference, Quebec, Canada, February 24–27, 2009. www.unithertechnologyconference.com/downloads09/SessionsDayOne/TIHAMER_web.ppt
7. Chris Phoenix and Tihamer Toth-Fejel, Large-Product General-Purpose Design and Manufacturing Using Nanoscale Modules: Final Report, CP-04-01, NASA Institute for Advanced Concepts, May 2005. http://www.niac.usra.edu/files/studies/final_report/1030Phoenix.pdf
8. The Federal Reserve distinguishes value exchange as M1 and the [storage] of value as M2. For a good description of the history and role of money, see Alan Greenspan, Gold and Economic Freedom. http://www.constitution.org/mon/greenspan_gold.htm
9. Karl Denninger describes the benefits and drawbacks of fractional reserve banking, pointing out that the key determinant is whether the debts incurred are productive (e.g., investments in tooling, land, or education) or consumptive (e.g., heating a house, buying a big-screen TV, or going on vacation). See http://market-ticker.denninger.net/archives/865-Reserve-Banking.html
10. Marc and Nathalie Fleury, The Financial Crisis for Dummies: Securitization. http://www.thedelphicfuture.org/2009/04/financial-crisis-for-dummies.html
11. Bill Joy, Why the future doesn’t need us. Wired (Apr 2000) http://www.wired.com/wired/archive/8.04/joy.html On some issues, Bill Joy was so far off that he wasn’t even wrong. See “Why the Future Needs Bill Joy” http://www.islandone.org/MMSG/BillJoyWhyCrit.htm
12. Qian Lulu, et al., Analogic China map constructed by DNA. Chinese Science Bulletin. Dec 2006. Vol. 51 No. 24

Acknowledgements
Thanks to Forrest Bishop, Jim Osborn, and Andrew Balet for many excellent critical comments on earlier drafts.

Tihamer Toth-Fejel, MS
General Dynamics Advanced Information Systems
Michigan Research and Development Center

The main proposed solutions to the Fermi Paradox are:
1) They are already here (at least in the form of their signals).
2) They do not spread through the universe, leave no traces, and send no signals; that is, they never start a shock wave of intelligence.
3) Civilizations are extremely rare.
A fourth line of thought is 4): we are a unique civilization because of observation selection.
All of them have sad implications for global risk:
In the first case, we are under threat of conflict with superior aliens.
1a) If they are already here, we could do something that encourages them to destroy or restrict us: for example, to switch off the simulation, or to launch a program of berserker probes. Such probes could be nanobots. In fact, it could be something like "space grey goo" with low intelligence but very wide distribution, whose only goal is to destroy other nanobots (much as our own Nanoshield would do). It could even be in my room right now, and we would not see it until we create our own nanobots.
1b) If they reach our star system right now and, moreover, are focused on the total colonization of all systems, we will have to fight them and are likely to lose. This is not probable.
1c) A large portion of civilizations may be infected with a SETI-virus and distribute signals specially designed to infect naive civilizations; that is, to encourage them to create a computer with an AI aimed at further replication through SETI channels. This is what I write about in the article "Is SETI dangerous?" http://www.proza.ru/texts/2008/04/12/55.html
1d) By means of METI signals we attract the attention of a dangerous civilization, which then sends a beam of death (perhaps appearing as a gamma-ray burst) toward the solar system. This scenario seems unlikely: in the time it takes them to receive the signal and react, we would have time to leave the solar system, if they are far away; and if they are close, it is not clear why they are not here already. However, this risk has been intensely discussed, for example by D. Brin.
2) They do not spread in space. This means that either:
2a) Civilizations are very likely to destroy themselves at an early stage, before they can launch a wave of replicator probes, and we are no exception. This is reinforced by the Doomsday Argument: the fact that I find myself in a young civilization suggests that young civilizations are much more common than old ones. However, given the expected pace of development of nanotechnology and artificial intelligence, we could start a wave of replicators within 10–20 years, and even if we then die, this wave would continue to spread throughout the universe. Given the uneven development of civilizations, it is difficult to believe that none of them managed to launch a wave of replicators before its death. This is possible only if: a) we fail to see an inevitable and universal threat looming directly over us in the near future; b) we significantly underestimate the difficulty of creating artificial intelligence and nanoreplicators; or c) the energy of the inevitable destruction is so great that it destroys all the replicators the civilization has launched, that is, it is on the order of a supernova explosion.
2b) Every civilization sharply limits itself, and this limitation must be very strict and long-lasting, since it is simple enough to launch even a single replicator probe. Such a restriction could rest either on powerful totalitarianism or on extreme depletion of resources. In this case too our prospects are quite unpleasant. But this solution is not very plausible.
3) If civilizations are rare, the universe is a much less friendly place to live than it appears, and we are on an island of stability that is likely an exception to the rule. This may mean that we underestimate the future stability of the processes important to us (the solar luminosity, the Earth's crust) and, most importantly, the robustness of these processes to small influences, that is, their fragility. We could inadvertently break through their levels of resistance by carrying out geoengineering activities, complex physics experiments, and the colonization of space. I say more about this in the article "Why the anthropic principle stopped defending us: Observation selection and fragility of our environment". http://www.scribd.com/doc/8729933/Why-antropic-principle-stops-to-defend-us-Observation-selection-and-fragility-of-our-environment- See also the works of M. Cirkovic on the same subject.
However, this fragility is not inevitable; it depends on which factors were critical in the Great Filter. In addition, we would not necessarily put pressure on these fragile points, even if they exist.
4) Observation selection makes us a unique civilization.
4a) We are the first civilization, because any civilization that is first captures the whole galaxy. Likewise, earthly life is the first life on Earth, because it consumed all the pools of nutrient broth in which another life could have appeared. In any case, sooner or later we will face another "first" civilization.
4b) The vast majority of civilizations are destroyed in the process of colonizing the galaxy, so we can find ourselves only in a civilization that has by chance not been destroyed. The obvious risk here is that those who made this error would try to correct it.
4c) We wonder about the absence of contact precisely because we are not in contact. That is, we are in a unique position, which does not permit any conclusions about the nature of the universe. This clearly contradicts the Copernican principle.
The worst variant for us here is 2a, imminent self-destruction, which has independent support from the Doomsday Argument but is undermined by the fact that we do not see alien von Neumann probes. I still believe that the most likely solution is Rare Earth.
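The Doomsday Argument mentioned above can be made quantitative. A minimal sketch of Gott's delta-t version, under the usual self-sampling assumption that an observer's birth rank is a random draw from all humans who will ever be born; the figure of 6x10^10 humans born so far is an assumed round estimate, not a number from the text:

```python
# Gott-style Doomsday Argument sketch.
# Assumptions: birth rank is uniformly distributed over all humans ever born;
# N_PAST is an assumed round estimate of humans born to date.
N_PAST = 6e10       # humans born so far (assumed)
CONFIDENCE = 0.95   # confidence level

# With probability CONFIDENCE we are NOT in the first (1 - CONFIDENCE)
# fraction of all births, which bounds the total:
#   N_total <= N_PAST / (1 - CONFIDENCE)
n_total_upper = N_PAST / (1 - CONFIDENCE)
print(f"With {CONFIDENCE:.0%} confidence, total humans ever born < {n_total_upper:.1e}")
```

Under these assumptions the bound is about 1.2x10^12 humans in total, which is the sense in which finding ourselves early counts as evidence against a long future.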


Paul J. Crutzen

Although this is the scenario we all hope (and work hard) to avoid — the consequences should be of interest to all who are interested in mitigation of the risk of mass extinction:

“WHEN Nobel prize-winning atmospheric chemist Paul Crutzen coined the word Anthropocene around 10 years ago, he gave birth to a powerful idea: that human activity is now affecting the Earth so profoundly that we are entering a new geological epoch.

The Anthropocene has yet to be accepted as a geological time period, but if it is, it may turn out to be the shortest — and the last. It is not hard to imagine the epoch ending just a few hundred years after it started, in an orgy of global warming and overconsumption.

Let’s suppose that happens. Humanity’s ever-expanding footprint on the natural world leads, in two or three hundred years, to ecological collapse and a mass extinction. Without fossil fuels to support agriculture, humanity would be in trouble. “A lot of things have to die, and a lot of those things are going to be people,” says Tony Barnosky, a palaeontologist at the University of California, Berkeley. In this most pessimistic of scenarios, society would collapse, leaving just a few hundred thousand eking out a meagre existence in a new Stone Age.

Whether our species would survive is hard to predict, but what of the fate of the Earth itself? It is often said that when we talk about “saving the planet” we are really talking about saving ourselves: the planet will be just fine without us. But would it? Or would an end-Anthropocene cataclysm damage it so badly that it becomes a sterile wasteland?

The only way to know is to look back into our planet’s past. Neither abrupt global warming nor mass extinction are unique to the present day. The Earth has been here before. So what can we expect this time?”

Read the entire article in New Scientist.

Also read “Climate change: melting ice will trigger wave of natural disasters” in the Guardian about the potential devastating effects of methane hydrates released from melting permafrost in Siberia and from the ocean floor.

Peter Garretson from the Lifeboat Advisory Board appears in the latest edition of New Scientist:

“IT LOOKS inconsequential enough, the faint little spot moving leisurely across the sky. The mountain-top telescope that just detected it is taking it very seriously, though. It is an asteroid, one never seen before. Rapid-survey telescopes discover thousands of asteroids every year, but there’s something very particular about this one. The telescope’s software decides to wake several human astronomers with a text message they hoped they would never receive. The asteroid is on a collision course with Earth. It is the size of a skyscraper and it’s big enough to raze a city to the ground. Oh, and it will be here in three days.

Far-fetched it might seem, but this scenario is all too plausible. Certainly it is realistic enough that the US air force recently brought together scientists, military officers and emergency-response officials for the first time to assess the nation’s ability to cope, should it come to pass.

They were asked to imagine how their respective organisations would respond to a mythical asteroid called Innoculatus striking the Earth after just three days’ warning. The asteroid consisted of two parts: a pile of rubble 270 metres across which was destined to splash down in the Atlantic Ocean off the west coast of Africa, and a 50-metre-wide rock heading, in true Hollywood style, directly for Washington DC.

The exercise, which took place in December 2008, exposed the chilling dangers asteroids pose. Not only is there no plan for what to do when an asteroid hits, but our early-warning systems — which could make the difference between life and death — are woefully inadequate. The meeting provided just the wake-up call organiser Peter Garretson had hoped to create. He has long been concerned about the threat of an impact. “As a taxpayer, I would appreciate my air force taking a look at something that would be certainly as bad as nuclear terrorism in a city, and potentially a civilisation-ending event,” he says.”

Read the entire article at New Scientist. Read the NASA NEO report “Natural Impact Hazard Interagency Deliberate Planning Exercise After Action Report”.

Nature News reports of a growing concern over different standards for DNA screening and biosecurity:

“A standards war is brewing in the gene-synthesis industry. At stake is the way that the industry screens orders for hazardous toxins and genes, such as pieces of deadly viruses and bacteria. Two competing groups of companies are now proposing different sets of screening standards, and the results could be crucial for global biosecurity.

“If you have a company that persists with a lower standard, you can drag the industry down to a lower level,” says lawyer Stephen Maurer of the University of California, Berkeley, who is studying how the industry is developing responsible practices. “Now we have a standards war that is a race to the bottom.”

For more than a year a European consortium of companies called the International Association of Synthetic Biology (IASB) based in Heidelberg, Germany, has been drawing up a code of conduct that includes gene-screening standards. Then, at a meeting in San Francisco last month, two of the leading companies — DNA2.0 of Menlo Park, California, and Geneart of Regensburg, Germany — announced that they had formulated a code of conduct that differs in one key respect from the IASB recommendations.”

Read the entire article on Nature News.

Also read “Craig Venter’s Team Reports Key Advance in Synthetic Biology” from JCVI.

I recently began to worry that something/someone, some field, force, disease, prion, virus, bad luck and/or natural causes could threaten and perhaps destroy the most valuable entity in the universe, an entity more valuable than life itself. Consciousness. What good is life extension without conscious awareness? What is consciousness?

We know the brain works a lot like a computer, with neuron firings and synapses acting like bit states and switches. Brain-as-computer works very well to account for sensory processing, control of behavior, learning and other cognitive functions. These functions may in some cases be non-conscious, and other times associated with conscious experience and control. Scientists seek the distinction – the essential feature, or trick for consciousness.

Some suggest there is no trick, consciousness emerges as a by-product of cognitive computation among neurons. Others say we don’t know, that consciousness may indeed require some feature related to, but not quite the same as neuron-to-neuron cognition.

In either case, humans and other creatures could in principle become devoid of consciousness while maintaining cognitive behaviors, appearing more-or-less normal to outside observers. Such hypothetical non-conscious behaving entities are referred to in literature, films and philosophical texts as ‘zombies’. Philosopher David Chalmers introduced the philosophical zombie, a test case for whether or not consciousness is distinct from cognitive neurocomputation.

I’ve studied and researched consciousness for over 35 years, and work as an anesthesiologist, erasing and restoring consciousness several times per day for surgery. Patients under anesthesia are not zombies. They lack consciousness but also lack cognition. On the other hand, for a very brief period after first emerging from anesthesia following surgery, my patients seem like zombies, behaving purposely but blankly. Like in the old song “She’s not there” by…..The Zombies.

During a routine surgery recently, one of the nurses was talking about a book called ‘Patient Zero’ in which a terrorist group turned people into zombies the terrorists were then able to control. I later discovered there exists an entire genre of zombie terror books and films (‘Invasion of the body snatchers’ being perhaps the original). Could it be possible? How could we protect ourselves from consciousness-snatchers who want to turn us into zombies? Well, we need to understand what consciousness is (but, so do ‘they’).

We do know consciousness correlates with a particular coherent EEG gamma synchrony. Somehow selectively blocking EEG brain-wide coherence while sparing neuron-to-neuron computation and cognition could conceivably erase consciousness. But I would bet on an even more subtle and profound feature or trick. For example I personally believe (with Sir Roger Penrose) that consciousness involves quantum computations in microtubules inside brain neurons.

Microtubules are the major structural component of the neuronal cytoskeleton whose disruption is an essential feature of Alzheimers disease. Microtubules dynamically organize intra-neuronal and synaptic activities, conduct signals, have collective vibrational and electromagnetic modes and quite possibly mesoscopic quantum states. Motor proteins and biomolecular agents traverse and interact with microtubules.

I became obsessed with microtubules in medical school in the early 1970s. Their cylindrical lattice structure of ‘tubulin’ protein subunits looked to me like a computing switching circuit. Through the 1980s, colleagues and I developed models of microtubule information processing in which states of tubulin subunits were bits interacting with lattice-neighbor tubulins. With about 10^7 tubulins per neuron switching at 10^−9 seconds, we calculated a potential for 10^16 operations per second in each neuron. This was, and remains, unpopular in AI/Singularity circles because it potentially pushes the goalpost for brain capacity significantly. Recent evidence has shown collective microtubule excitations at 10^−7 seconds (rather than the 10^−9 seconds we assumed), indicating a neuronal information capacity of ‘only’ 10^14 operations per second.
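The capacity figures above are simple multiplication; a sketch of the arithmetic, using only the numbers quoted in this paragraph:

```python
# Back-of-envelope neuronal capacity from microtubule switching,
# using the figures quoted in the text.
tubulins_per_neuron = 1e7    # ~10^7 tubulin subunits per neuron
switch_time_assumed = 1e-9   # seconds per state change (original assumption)
switch_time_measured = 1e-7  # seconds (recent collective-excitation evidence)

# operations/second = subunits * (state changes per subunit per second)
ops_assumed = tubulins_per_neuron / switch_time_assumed    # ~10^16 ops/s
ops_measured = tubulins_per_neuron / switch_time_measured  # ~10^14 ops/s
```

Either figure dwarfs the ~10^3-10^4 synaptic operations per second per neuron assumed in conventional brain-capacity estimates, which is why this model moves the goalposts.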

But here’s the really good news. Microtubules self-assemble. With proper conditions tubulins polymerize into microtubules, and with associated proteins into networks of cross-linked microtubules. In principle, tubulin and other necessary proteins can be genetically mass-produced, and then self-assemble into large arrays. If microtubules process molecular-scale information (quantum or classical), appropriate arrays of microtubules could serve as a repository of consciousness — a ‘Lifeboat’.

These could be useful. Evil forces aside, consciousness-snatchers include aging, disease and death. In 1987 I wrote a book about microtubule information processing based entirely on classical (non-quantum) processes. The brief, concluding chapter considered arrays of microtubules as orbiting consciousness Lifeboats. It foreshadowed the Singularity, and in retrospect also applies to quantum processes. The chapter follows below. And we should understand consciousness not just to preserve it, but to enhance it in any way possible.

From
Ultimate computing: Biomolecular consciousness and nanotechnology
Elsevier, 1987
http://www.quantumconsciousness.org/ultimatecomputing.html

11 The Future of Consciousness

Nanotechnology may enable the dream of Mind/Tech merger to materialize. At long last, debates about the nature of consciousness will move from the domain of philosophy to large scale experiments. The visions of consciousness interfacing with, or existing within, computers or mind piloted robots expressed by Moravec, Margulis, Sagan and Max Headroom could be realized. Symbiotic association of replicative nanodevices and cytoskeletal networks within living cells could not only counter disease processes, but lead to exchange of information encoded in the collective dynamic patterns of cytoskeletal subunit states. If these are indeed the roots of consciousness, a science fiction-like deciphering and transfer of mind content may become possible. One possible scenario could utilize a small window in a specific brain region. Hippocampal temporal lobe, a site where memories enter and where electromagnetic radiation from outside the skull penetrates most readily and harmlessly, is one possible area where information distributed throughout the brain may perhaps be accessed and manipulated. Techniques such as laser interferometry, electroacoustical probes scanned over brain surfaces, or replicative nanoprobes immunotargeted to key hippocampal tubulins, MAPs, and other cytoskeletal components might be developed to perceive and transmit the content of consciousness.

What technological device would be capable of receiving and housing the information emanating from some 10^15 tubulin subunits changing state some 10^9 times per second? One possibility is a customized array of nanoscale automata, perhaps utilizing superconducting materials. Another possibility is a genetically engineered array of some 10^15 tubulin subunits (or many more) assembled into parallel tensegrity arrays of interconnected microtubules, and other cytoskeletal structures. Current and near future genetic engineering capabilities should enable isolation of genes responsible for a specific individual’s brain cytoskeletal proteins, and reconstitution in an appropriate medium. Thus the two evident sources of mind content (heredity and experience) may be eventually reunited in an artificial consciousness environment. A polymerized cytoskeletal array would be highly unstable and dependent on biochemical, hormonal, and pharmacological maintenance of its medium. Precise monitoring and control of cytoskeletal consciousness environments may become an important new branch of anesthesiology. Polymerization of cell-free cytoskeletal lattices would be limited in size (and potential intellect) due to gravitational collapse. Possible remedies might include hybridizing the cytoskeletal array by metal deposition, symbiosis with synthetic nanoreplicators, or placement of the cytoskeletal array in a zero gravity environment. Perhaps future consciousness vaults will be constructed in orbiting space stations or satellites. People with terminal illnesses may choose to deposit their mind in such a place, where their consciousness can exist indefinitely, and (because of enhanced cooperative resonance) in a far greater magnitude. Perhaps many minds can commingle in a single large array, obviating loneliness, but raising new sociopolitical issues.
Entertainment, earth communication, and biochemical mood and maintenance can be supplied by robotics, perhaps leading to the next symbiosis-robotic space voyagers (shaped like centrioles?) whose intelligence is derived from cytoskeletal consciousness.

Yes, this is science fiction. Will it become reality like so much previous science fiction has? Probably not precisely as suggested; but if past events are valid indicators, the future of consciousness may be even more outrageous.

50 years ago Herman Kahn coined the term in his book "On Thermonuclear War". His ideas are still important, and now we can read what he actually said online. His main points are that a Doomsday Machine is feasible, that it would cost around 10–100 billion USD, that it will become much cheaper in the future, and that there are seemingly rational reasons to build one as the ultimate means of defence; but that it is better not to build it, because doing so would lead to a DM race between states, with ever more dangerous and effective Doomsday Machines as the outcome. Such a race would not be stable, but would provoke one side to strike first. This book, and especially this chapter, inspired Kubrick's movie "Dr. Strangelove".
Herman Kahn. On the Doomsday Machine.

Abstract:

President Obama disbanded the President’s Council on Bioethics after it questioned his policy on embryonic stem cell research. White House press officer Reid Cherlin said that this was because the Council favored discussion over developing a shared consensus. This column lists a number of problems with Obama’s decision, and with his position on the most controversial bioethical issue of our time.

Bioethics and the End of Discussion

In early June, President Obama disbanded the President’s Council on Bioethics. According to White House press officer Reid Cherlin, this was because the Council was designed by the Bush administration to be “a philosophically leaning advisory group” that favored discussion over developing a shared consensus. http://www.nytimes.com/2009/06/18/us/politics/18ethics.html?_r=2

Shared consensus? Like the shared consensus about the Mexico City policy, government funding of Embryonic Stem Cell Research for new lines, or taxpayer funded abortions? All this despite the fact that 51% of Americans consider themselves pro-life? By allowing publicly-funded Embryonic Stem Cell Research only on existing lines, President Bush made a decision that nobody was happy with, but at least it was an honest compromise, and given the principle of double effect, an ethically acceptable one.

President Obama will appoint a new bioethics commission, one with a new mandate and that “offers practical policy options,” Mr. Cherlin said.

Practical policy options? Like the ones likely to be given by Obama’s new authoritative committee to expediently promote the license to kill the most innocent and vulnerable? But that is only the start. As the baby boomers bankrupt Social Security, there will be a strong temptation to expand Obama’s mandate to include the aging “useless mouths”. Oregon and the Netherlands have already shown the way—after all, a suicide pill is much cheaper than palliative care, and it’s much more cost-effective to kill patients rather than care for them. (http://www.euthanasia.com/argumentsagainsteuthanasia.html)

Evan Rosa details many problems with Obama’s decision to disband the Council (http://www.cbc-network.org/research_display.php?id=388), but there are additional disturbing implications:

First, democracies are absolutely dependent on discussion. Dictators have always suppressed free discussion on “sensitive” subjects because it is the nature of evil to fear criticism. This has been true here in the United States, too—in the years leading up to the Civil War, Southern senators and representatives tried to squelch all discussion on slavery. Maybe their consciences bothered them.

Second, no matter how well-meaning the participants may be, consensus between metaphysically opposed parties is impossible in some matters (such as the humanity of a baby a few months before he or she is born, the existence of God, consequentialist vs. deontological reasoning, etc.). The only way to get “consensus” in such situations is by exercising the monopoly of force owned by the government.

Third, stopping government-sponsored discussion on bioethics sets a dangerous precedent for the ethics surrounding nanotechnology. There are numerous ethical issues that nanotechnology is raising, and will continue to raise, that desperately require significant amounts of detailed discussion and deep thinking.

Tyrants begin by marginalizing anyone who disagrees with them, calling them hate-mongering obstructionists (or worse). In addition, they will use governmental power to subdue any who dare oppose their policies.

The details of the dismissal of the Council clearly show this tendency, though the Council members are not acting very subdued. As one of them supposedly put it, “Instead of meeting at seminars, now we’ll be meeting on Facebook.”

On March 9, Obama removed restrictions on federal funding for research on embryonic stem cell lines derived by means that destroy human embryos.

On March 25, ten out of the eighteen members of the Council questioned Obama’s policy (http://www.thehastingscenter.org/Bioethicsforum/Post.aspx?id=3298).

In the second week of June, Obama fired them all.

Could it be that Obama doesn’t want discussion? We can see what happens if someone gives him advice that he doesn’t want.

Oprah Winfrey’s favorite physician, Dr. Mehmet Oz, told her and Michael J. Fox that “the stem cell debate is dead” because “the problem with embryonic stem cells is that [they are]… very hard to control, and they can become cancerous” (http://www.oprah.com/media/20090319-tows-dr-oz-brain). Besides, induced pluripotent cells can become embryonic, thereby negating the very difficult necessity of cloning.

So “harvesting” embryonic stem cells is not only ethically problematic (i.e. wrong), but it is also scientifically untenable. Obama supports it anyway.

Maybe he could fire Oprah.

Tihamer Toth-Fejel, MS
General Dynamics Advanced Information Systems
Michigan Research and Development Center

Artificial brain ’10 years away’

By Jonathan Fildes
Technology reporter, BBC News, Oxford

A detailed, functional artificial human brain can be built within the next 10 years, a leading scientist has claimed.

Henry Markram, director of the Blue Brain Project, has already simulated elements of a rat brain.

He told the TED Global conference in Oxford that a synthetic human brain would be of particular use finding treatments for mental illnesses.

Around two billion people are thought to suffer some kind of brain impairment, he said.

“It is not impossible to build a human brain and we can do it in 10 years,” he said.

“And if we do succeed, we will send a hologram to TED to talk.”

‘Shared fabric’

The Blue Brain project was launched in 2005 and aims to reverse engineer the mammalian brain from laboratory data.

In particular, his team has focused on the neocortical column — repetitive units of the mammalian brain known as the neocortex.

Neurons

The team are trying to reverse engineer the brain

“It’s a new brain,” he explained. “The mammals needed it because they had to cope with parenthood, social interactions and complex cognitive functions.

“It was so successful an evolution from mouse to man it expanded about a thousand fold in terms of the numbers of units to produce this almost frightening organ.”

And that evolution continues, he said. “It is evolving at an enormous speed.”

Over the last 15 years, Professor Markram and his team have picked apart the structure of the neocortical column.

“It’s a bit like going and cataloguing a bit of the rainforest — how many trees does it have, what shape are the trees, how many of each type of tree do we have, what is the position of the trees,” he said.

“But it is a bit more than cataloguing because you have to describe and discover all the rules of communication, the rules of connectivity.”

The project now has a software model of “tens of thousands” of neurons — each one of which is different — which has allowed them to digitally construct an artificial neocortical column.

Although each neuron is unique, the team has found that the circuitry of different brains shares common patterns.

“Even though your brain may be smaller, bigger, may have different morphologies of neurons — we do actually share the same fabric,” he said.

“And we think this is species specific, which could explain why we can’t communicate across species.”

World view

To make the model come alive, the team feeds the models and a few algorithms into a supercomputer.

“You need one laptop to do all the calculations for one neuron,” he said. “So you need ten thousand laptops.”

Computer-generated image of a human brain

The research could give insights into brain disease

Instead, he uses an IBM Blue Gene machine with 10,000 processors.

Simulations have started to give the researchers clues about how the brain works.

For example, they can show the brain a picture — say, of a flower — and follow the electrical activity in the machine.

“You excite the system and it actually creates its own representation,” he said.

Ultimately, the aim would be to extract that representation and project it so that researchers could see directly how a brain perceives the world.

But as well as advancing neuroscience and philosophy, the Blue Brain project has other practical applications.

For example, by pooling all the world’s neuroscience data on animals to create a “Noah’s Ark”, researchers may be able to build animal models.

“We cannot keep on doing animal experiments forever,” said Professor Markram.

It may also give researchers new insights into diseases of the brain.

“There are two billion people on the planet affected by mental disorder,” he told the audience.

The project may give insights into new treatments, he said.

The TED Global conference runs from 21 to 24 July in Oxford, UK.