Toggle light / dark theme

If you ever swore to yourself (or to another) that you'd never get a tattoo, you may just want to reconsider. Within just a couple of years, you may have a very good reason to get one made of "nanoink".

As recently reported on Discovery News, "nanoink" allows for monitoring blood glucose in real time right under the skin. It does so by using a hydrophobic nanoparticle that changes color as glucose levels rise and fall. The ink consists of a glucose-detecting molecule, a color-changing dye, and a molecule that mimics glucose. These three components continuously swish around inside a 120-nm orb. When glucose is present, the glucose-detecting molecule attaches to it and the ink glows yellow; when glucose is absent, the ink turns orange.

This technology has an obvious advantage over traditional glucose monitoring: a single needle stick to place the tattoo, rather than the tens of thousands of sticks a diabetic will need over a lifetime.

Another advantage of nanoink tattoos: they can be removed. At least one researcher at Brown University has developed tattoo ink with microencapsulated beads coated with a polymer; when the coating is broken with a single laser treatment, the ink can simply be expelled from the body, as opposed to the multiple laser treatments required to remove conventional tattoos.

Diabetes isn't the only disease candidate for this technology. The original research on nanoink tattoos was aimed at monitoring sodium levels in the body, but it then occurred to researchers that glucose could be a far more useful target. The potential uses of "nanoink" as a monitoring technology are almost limitless: once the concept is proven to work for more complex molecules such as glucose, almost any chronic disease could be monitored, from heart disease to hyperthyroidism to various blood disorders.

According to the researchers at Draper Laboratories studying this technology, the tattoo doesn't have to be a huge Tweety Bird on your ankle or a heart on your shoulder; in fact, according to one of the Draper researchers, the tattoo could be just a "few millimeters in size and wouldn't have to go as deep as a normal tattoo".
Disease-monitoring nano-tattoos, therefore, can be both tiny and painless. Of course, they could be stylish, too, but the nanoink is likely to cost a pretty penny—so before you imagine a giant tribal arm stamp to monitor your heart disease, you may have to think again.

It may be at least two years before tattoos for monitoring your diabetes are available on the market—so unfortunately, the test strips and the sticking of fingers and thumbs aren't going away for diabetics any time soon. But hopefully, someday in the not-so-distant future, nanotechnology will make the quality of life just a little bit better for diabetics, and perhaps improve disease management for other chronic diseases like heart disease as well. In the meantime, you can dream up what you want your "nanoink" tattoo to look like.

Summer Johnson, PhD
Column Editor, Lifeboat Foundation
Executive Managing Editor, The American Journal of Bioethics

I have translated "Lifeboat Foundation Nanoshield" into Russian (http://www.scribd.com/doc/12113758/Nano-Shield) and I have some thoughts about it:

1) An effective means of defense against ecophagy would be to turn all the matter on Earth into nanorobots in advance, just as every human body is composed of living cells (although this does not preclude the emergence of cancer cells). The visible world would not change. All objects would consist of nano-cells with enough immune potential to resist almost any foreseeable ecophagy (except purely informational attacks, like computer viruses). Even each living cell would contain a small nanobot to control it. Maybe the world already consists of nanobots.
2) The authors of the project suggest that an ecophagic attack would consist of two phases: reproduction and destruction. However, the creators of ecophagy could use three phases. The first phase would be a quiet distribution across the Earth's surface, below the surface, and in the water and air. In this phase the nanorobots would multiply slowly and, most importantly, spread as far from each other as possible, so that their concentration everywhere on Earth would end up around one unit per cubic meter (which makes them unrecognizable). Only then would they start to proliferate intensively, simultaneously creating non-replicating soldier nanorobots that attack the defensive system. They would first have to suppress the protection systems, as AIDS does, or as a modern computer virus switches off the antivirus software; the creators of future ecophagy will understand this. Since the second phase of rapid growth would begin everywhere on the Earth's surface at once, it would be impossible to apply tools of destruction such as nuclear strikes or aimed rays, as this would mean the death of the planet in any case — and there simply would not be enough bombs in store.
3) The authors overestimate the reliability of protection systems. Any system has a control center, which is a weak spot. The authors implicitly assume that any person may, with some small probability, suddenly become a terrorist willing to destroy the world (and although the probability is very small, the large number of people living on Earth makes it meaningful). But because such a system would be managed by people, those people may also want to destroy the world. Nanoshield could destroy the entire world after one erroneous command. (Even if an AI manages it, we cannot say a priori that the AI cannot go mad.) The authors believe that multiple overlapping layers of Nanoshield protection will make it 100% safe from hackers, but no known computer system is 100% safe — all major computer programs have been broken by hackers, including Windows and the iPod.
4) Nanoshield could develop something like an autoimmune reaction. The authors' idea that it is possible to achieve 100% reliability by increasing the number of control systems is very superficial: the more complex a system is, the more difficult it is to calculate all the variants of its behavior, and the more likely it is to fail in the spirit of chaos theory.
5) Each cubic meter of ocean water contains 77 million living beings (in the North Atlantic, according to the book "Zoology of Invertebrates"). Hostile ecophages can easily camouflage themselves as natural living beings, and vice versa; the ability of natural living beings to reproduce, move, and emit heat will significantly hamper the detection of ecophages, creating a high level of false alarms. Moreover, ecophages may at some stage in their development be fully biological creatures, with all the blueprints of the nanorobot recorded in DNA, and thus be almost indistinguishable from normal cells.
6) There are significant differences between ecophages and computer viruses. The latter exist in an artificial environment that is relatively easy to control: for example, one can turn off the power, get direct access to memory, or boot from other media, and an antivirus can be delivered instantly to any computer. Nevertheless, a significant portion of computers have been infected with viruses, and many users are resigned to the presence of some malware on their machines as long as it does not slow their work too much.
7) Compare: Stanislaw Lem wrote a story, "Darkness and Mold," whose main plot concerns ecophages.
8) The problem of Nanoshield must be analyzed dynamically in time: the technical perfection of Nanoshield must exceed the technical perfection of nanoreplicators at any given moment. From this perspective, the whole concept seems very vulnerable, because creating an effective global Nanoshield requires many years of nanotechnology development — both constructive and political — while creating a primitive ecophage capable, nevertheless, of completely destroying the biosphere requires much less effort. An example: creating a global missile defense system (ABM, which still does not exist) is far more complex technologically and politically than creating intercontinental nuclear missiles.
9) We should be aware that in the future there will be no fundamental difference between computer viruses, biological viruses, and nanorobots: all of them are information, given the availability of "fabs" that can transfer information from one carrier to another. Living cells could construct nanorobots, and vice versa; spreading over computer networks, computer viruses could capture bioprinters or nanofabs and force them to produce dangerous bioorganisms or nanorobots (or malware could even be integrated into existing computer programs, nanorobots, or the DNA of artificial organisms). These nanorobots could then connect to computer networks (including the network that controls Nanoshield) and send their code in electronic form. In addition to these three forms of virus (nanotechnological, biotechnological, and computer), other forms are possible — for example, "cogno": a virus transformed into a set of ideas in the human brain that push a person to write computer viruses and nanobots. The idea of "hacking" is now such a meme.
10) It must be noted that in the future artificial intelligence will be much more accessible, and thus viruses will be much more intelligent than today's computer viruses. The same applies to nanorobots: they will have a certain understanding of reality and the ability to rebuild themselves quickly, even to invent innovative designs and adapt to new environments. An essential question about ecophagy is whether the individual nanorobots are independent of each other, like bacterial cells, or act as a unified army with a single command and communication system. In the latter case, it is possible to intercept the command of the hostile ecophage army.
11) Anything suitable for combating ecophagy is also suitable as a defensive (and possibly offensive) weapon in a nanowar.
12) Nanoshield is possible only as a global organization. If any part of the Earth is not covered by it, Nanoshield will be useless (because nanorobots will multiply there in such quantities that it will be impossible to confront them). It is an effective weapon against people and organizations. So it can appear only after the full and final political unification of the globe, which may result either from a world war for the unification of the planet or from the merging of humanity in the face of a terrible catastrophe, such as an outbreak of ecophagy. In any case, the appearance of Nanoshield would likely be preceded by some accident, which means a great chance of the loss of humanity.
13) The discovery of "cold fusion" or other unconventional energy sources would make possible a much more rapid spread of ecophagy, as ecophages would be able to live in the bowels of the earth and would not require solar energy.
14) It is wrong to consider self-replicating and non-replicating nanoweapons separately. Some kinds of ecophagy can produce nano-soldiers that attack and kill all life. (Such ecophagy could become a tool of global blackmail.) It has been said that a few kilograms of nano-soldiers could be enough to destroy all the people on Earth. Some kinds of ecophagy could, in an early phase, disperse throughout the world, multiplying and moving very slowly and quietly, then produce a number of nano-soldiers that attack humans and defensive systems, and only then begin to multiply intensively in all areas of the globe. But a human stuffed with nano-medicine could resist an attack by nano-soldiers, since medical nanorobots would be able to neutralize poisons and repair torn arteries. In that case, a small nanorobot must attack primarily informationally, rather than through a large release of energy.
15) Does information transparency mean that everyone can access the code of a dangerous computer virus, or the description of a nanorobot-ecophage? A world where viruses and knowledge of mass destruction can be instantly disseminated through the tools of information transparency can hardly be secure. We need to control not only the nanorobots, but primarily the persons or other entities that may release ecophages. The smaller the number of such people (for example, scientist-nanotechnologists), the easier it is to control them. On the contrary, the diffusion of this knowledge among billions of people will make the emergence of nano-hackers inevitable.
16) The claim that the number of creators of defenses against ecophagy will exceed the number of creators of ecophagy by many orders of magnitude seems doubtful if we consider the example of computer viruses. There we see that, conversely, the number of virus writers exceeds the number of firms and projects working on anti-virus protection by many orders of magnitude; moreover, most anti-virus systems cannot work together, as they block each other. Terrorists could also masquerade as people opposing ecophagy and try to deploy their own system for combating it, one containing a backdoor that allows it to be suddenly reprogrammed for a hostile goal.
17) The text implicitly assumes that Nanoshield precedes the invention of self-improving AI of superhuman level. However, from other forecasts we know that this event is very likely, and most likely to occur simultaneously with the flourishing of advanced nanotechnology. Thus, it is not clear in what timeframe the Nanoshield project could exist. A developed artificial intelligence would be able to create a better Nanoshield and Infoshield, and also the means to overcome any human-made shield.
18) We should be aware of the equivalence of nanorobots and nanofactories: the first can create the second, and vice versa. This erases the border between replicating and non-replicating nanomachines, because a device not initially intended to replicate itself could somehow construct a nanorobot, or reprogram itself into a nanorobot capable of replication.

Abstract

What counts as rational development and commercialization of a new technology—especially something as potentially wonderful (and dangerous) as nanotechnology? A recent newsletter of the EU nanomaterials characterization group NanoCharM got me thinking about this question. Several authors in this newsletter advocated, by a variety of expressions, a rational course of action. And I’ve heard similar rhetoric from other camps in the several nanoscience and nanoengineering fields.

We need a sound way of characterizing nanomaterials, and then an account of their fate and transport, and their novel properties. We need to understand the bioactivity of nanoparticles, and their effect in the environments where they may end up. We need to know what kinds of nanoparticles occur naturally, which are incidental to other engineering processes, and which we can engineer de novo to solve the world's problems—and to fill some portion of the world's bank accounts. We need life-cycle analyses, and toxicity and exposure studies, and cost-benefit analyses. It's just the rational way to proceed. Well, who could argue with that?

Article

What counts as rational development and commercialization of a new technology—especially something as potentially wonderful (and dangerous) as nanotechnology? A recent newsletter of the EU nanomaterials characterization group NanoCharM got me thinking about this question. Several authors in this newsletter advocated, by a variety of expressions, a rational course of action. And I’ve heard similar rhetoric from other camps in the several nanoscience and nanoengineering fields.

We need a sound way of characterizing nanomaterials, and then an account of their fate and transport, and their novel properties. We need to understand the bioactivity of nanoparticles, and their effect in the environments where they may end up. We need to know what kinds of nanoparticles occur naturally, which are incidental to other engineering processes, and which we can engineer de novo to solve the world's problems—and to fill some portion of the world's bank accounts. We need life-cycle analyses, and toxicity and exposure studies, and cost-benefit analyses. It's just the rational way to proceed. Well, who could argue with that?

Leaving aside the lunatic fringe—those who would charge ahead guns (or labs) a-blazing—I suspect that there is broad but shallow agreement on and advocacy of the rational development of nanotechnology. That is, what is “rational” to the scientists might not be “rational” to many commercially oriented engineers, but each group would lay claim to the “rational” high ground. Neither conception of rational action is likely to be assimilated easily to the one shared by many philosophers and ethicists who, like me, have become fascinated by ethical issues in nanotechnology. And when it comes to rationality, philosophers do like to take the high ground but don’t always agree where it is to be found—except under one’s own feet. Standing on the top of the Himalayan giant K2, one may barely glimpse the top of Everest.

So in the spirit of semantic housekeeping, I’d like to introduce some slightly less abstract categories, to climb down from the heights of rationality and see if we might better agree (and more perspicuously disagree) on what to think and what to do about nanotechnology. At the risk of clumping together some altogether disparate researchers, I will posit that the three fields mentioned above—science, engineering, and philosophy—want different things from their “rational” courses of action.

The scientists, especially the academics, want knowledge of fundamental structures and processes of nanoparticles. They want to fit this knowledge into existing accounts of larger-scale particles in physics, chemistry, and biology. Or they want to understand how engineered and natural nanoparticles challenge those accounts. They want to understand why these particles have the causal properties that they do. Prudent action, from the scientific point of view, requires that we not change the received body of knowledge called science until we know what we’re talking about.

The engineers (with apologies here to academic engineers who are more interested in knowledge-creation than product-creation) want to make things and solve problems. Prudence on their view involves primarily ends-means or instrumental rationality. To pursue the wrong means to an end—for instance, to try to construct a new macro-level material from a supposed stock of a particular engineered nanoparticle, without a characterization or verification of what counts as one of those particles—is just wasted effort. For the engineers, wasted effort is a bad thing, since there are problems that want solutions, and solutions (especially to public health and environmental problems) are time sensitive. Some of these problems have solutions that are non-nanotech, and the market rewards the first through the gate. But the engineers don’t need a complete scientific understanding of nanoparticles to forge ahead with efforts. As Henry Petroski recently said in the Washington Post (1/25/09), “[s]cience seeks to understand the world as it is; only engineering can change it.”

The philosophers are of course a more troublesome lot. Prudence on their view takes on a distinctly moral tinge, but they recognize the other forms too. Philosophers are mostly concerned with the goodness of the ends pursued by the engineers, and the power of the knowledge pursued by the scientists. Ever since von Neumann’s suggestion of the technological inevitability of scientific knowledge, some philosophers have worried that today’s knowledge, set aside perhaps because of excessive risks, can become tomorrow’s disastrous products.

The key disagreement, though, is between the engineers and the philosophers, and the central issues concern the plurality of good ends and the incompatibility of some of them with others. For example, it is certainly a good end to have clean drinking water worldwide today, and we might move toward that end by producing filtration systems with nanoscale silver or some other product. It is also a good end to have healthy aquatic ecosystems today, and to have viable fisheries tomorrow, and future people to benefit from them. These ends may not all be compatible. When we add up the good ends over many scales, the balancing problem becomes almost insurmountable. Just consider a quick accounting: today's poor, many of whom will die from water-borne disease; cancer patients sickened by the imprecise "cures" given to them; future people whose access to clean water and sustainable forms of energy hangs in the balance. We could go on.

When we think about these three fields and their allegedly separate conceptions of prudent action, it becomes clear that their conceptions of prudence can be held by one and the same person, without fear of multiple personality disorder. Better, then, to consider these scientific, engineering, and philosophical mindsets, which are held in greater or lesser concentrations by many researchers. That they are held in different concentrations by the collective consciousness of the nanotechnology field is manifest, it seems, by the disagreement over the right principle of action to follow.

I don’t want to “psychologize” or explain away the debate over principles here, but isn’t it plausible to think that advocates of the Precautionary Principle have the philosophical mindset to a great degree, and so they believe that catastrophic harm to future generations isn’t worth even a very small risk? That is because they count the good ends to be lost as greater in number (and perhaps in goodness) than the good ends to be gained.

Those of the engineering mindset, on the other hand, want to solve problems for people living now, and they might not worry so much about future problems and future populations. They are apt to prefer a straightforward Cost-Benefit Principle, with serious discounting of future costs. The future, after all, will have its own engineers, and a new set of tools for the problems it faces. Of course, those of us alive today will in large part create the problems faced by those future people. But we will also bequeath to them our science and engineering.

I’d like to offer a conjecture at this point about the basic insolubility of tensions between the scientific, engineering, and philosophical mindsets and their conceptions of prudent action. The conjecture is inspired by the Impossibility Theorem of the Nobel Prize winning economist Kenneth Arrow, but only informally resembles his brilliant conclusion. In a nutshell, it is this. If we believe that the nanotechnology field has to aggregate preferences for prudential action over these three mindsets, where there are multiple choices to be made over development and commercialization of nanotechnology’s products, we will not come to agreement on what counts as prudent action. This conjecture owes as much to the incommensurability of various good ends, and the means to achieve them, as it does to the kind of voting paradox of which Arrow’s is just one example.

If I am right in this conjecture, we shouldn’t be compelled to try to please all of the people all of the time. Once we give up on this “everyone wins” mentality, perhaps we can get on with the business of making difficult choices that will create different winners and losers, both now and in the future. Perhaps we will also get on with the very difficult task of achieving a comprehensive understanding of the goals of science, engineering, and ethics.

Thomas M. Powers, PhD
Director—Science, Ethics, and Public Policy Program
and
Assistant Professor of Philosophy
University of Delaware

Sometimes what may save your life can come from the most unsuspecting places. Then sometimes, what can save your life in one circumstance may be highly risky, or at least technologically premature, in another. Lifeboat Foundation is about making those distinctions regarding emerging technologies and knowing the difference.

MIT scientists from the Institute for Soldier Nanotechnologies announced in January 2007 that they had reached an elusive engineering milestone: they had successfully created a synthetic material with the same properties as spider silk.1 The combination of elasticity and strength found in spider silk has long been a target for synthetic manufacturing, promising improved materials as diverse as packaging, clothing, and medical devices. Using tiny clay disks approximately one billionth of a meter in size, combined with a rubbery polymer, the researchers created a stretchy but strong polymer nanocomposite.

The use of nanocomposites to produce packaging materials or clothing seems relatively safe and uncontroversial because the materials remain outside the body. The United States military has already indicated, according to one source, its desire to use the material for military uniforms and to improve packaging for those lovely-tasting MREs.2 In fact, this is why the Army-funded Institute for Soldier Nanotechnologies is supporting the research: to develop pliable but tough body armor for soldiers in combat. Moreover, imagine, for example, a garbage bag that could hold an anvil without breaking. The commercial applications may be endless—but there should be real concern regarding the ways in which these materials might be introduced into human bodies.

Although this synthetic spider silk may conjure up images of one day being able to have the capabilities of Peter Parker or unbreakable, super-strength bones, there are some real concerns regarding the potential applications of this technology, particularly for medical purposes. Some have argued that polymer nanocomposite materials could be used as the mother of all Band-Aids or nearly indestructible stents. For hundreds of years, spider silks have been thought to have great potential for wound covering. In general, nanocomposite materials have been heralded for medical applications as diverse as bone grafts to antimicrobial surfaces for medical instruments.

While it would be ideal to have a nanocomposite that is both flexible and tough for use in bone replacements and grafts, the concern is that in vivo use might affect the integrity and properties of the material. Moreover, what happens when a nano-stent begins to break down? Would we be able to detect nano-sized clay particles breaking away from a wound cover and rushing under the skin, or racing through our bloodstream from a nano-stent? Without the ability to monitor the integrity of such a device, and given that the composite materials of such interventions are less than one-thousandth the width of a human hair, should we really be moving toward introducing such materials into human bodies? The obvious answer is that without years of clinical trials in humans, such clinical applications cannot, and will not, happen.

Although the synthetic spider silk would be ideal for certain applications, medical products would ideally be made of biodegradable materials, and this clay-based polymer nanocomposite is not. Thus, although the MIT scientists have proven the concept of polymer nanocomposites that possess the properties of spider silk, they have not conclusively shown that these would be useful for biomedical interventions, and cannot until they have completed human clinical trials, which could be 5–10 years in the future.

In the meantime, however, such scientific advances should be applied to materials-science problems like the ones being addressed at the MIT Institute for Soldier Nanotechnologies. Nanomaterials used outside the human body, or for improving consumer products, are an important development in applied nanotechnology. They can, and will, improve the lives of servicemen and women once their safety and efficacy in real-world environments are tested, and eventually improve consumer products as well.

So the next time you see a spider in the corner, rather than smashing it into oblivion you may just want to look at it for a moment and say "thank you." (And then run, if you wish.) But stay tuned: medical applications will someday come as well. Someday a spider may just save your life.

Summer Johnson, PhD
Member, Lifeboat Foundation and Nanoethics Columnist for Nanotech-Now.com and Lifeboat Foundation

Executive Managing Editor, The American Journal of Bioethics

1. MIT News. January 17th, 2007. Nanocomposite Research Yields Strong But Stretchy Fibers

2. NanoScienceWorks. MIT Nanocomposite Research Yields Lycra-like Fibers — Strong and Stretchy Material Inspired by Spider Silk

I wrote an essay on the possibility of the artificial initiation of a fusion explosion of the giant planets and other objects of the Solar System. It is not a scientific article, but an attempt to collect all the necessary information about this existential risk. I conclude that it cannot be ruled out as a technical possibility, and that it could later be carried out as an act of space war that could sterilize the entire Solar System.

There are some events which are very improbable but whose consequences could be almost infinitely large (e.g., black holes at the LHC). The possibility of nuclear ignition of a self-sustaining fusion reaction inside giant planets like Jupiter and Saturn, which could lead to the explosion of the planet, is one of them.

Inside the giant planets, thermonuclear fuel exists under high pressure and at high density. For certain substances this density is higher than the density of the same substances on Earth (except, perhaps, water). Large quantities of the substance would not fly away from the reaction zone quickly, allowing enough time for a large energy release. This fuel has never been involved in fusion reactions, so it retains the easily combustible components — deuterium, helium-3, and lithium — that have burned away entirely in stars. In addition, the interiors of the giant planets contain fuel for reactions that could sustain an explosive burn, namely the triple-helium reaction (3 He-4 → C-12) and the fusion of hydrogen with oxygen, although these require much higher temperatures to start. The matter in the bowels of the giant planets is a degenerate metallic sea, like the matter of white dwarfs, in which explosive thermonuclear burning regularly takes place in the form of helium flashes and Type I supernovae.
The more opaque the environment, the greater the chances for a reaction to proceed, since less energy is lost to scattering; and the bowels of the giant planets contain many impurities, so lower transparency can be expected. Gravitational differentiation and chemical reactions could create regions within the planet that are more suitable for starting the reaction in its initial stages.

The stronger the explosion of the fuse, the larger the initial burning region, and the more likely the reaction is to be self-sustaining, since the energy losses will be smaller and the amount of reacting substance and the reaction time greater. It can be assumed that with a sufficiently powerful fuse, the reaction would become self-sustaining.

Recently, the Galileo spacecraft was deliberately plunged into Jupiter. Galileo carried pellets of plutonium-238 which, under some assumptions, could undergo a chain reaction and lead to a nuclear explosion. It is interesting to ask whether this could lead to the explosion of the giant planet. The spacecraft Cassini may eventually enter Saturn with unknown consequences. In the future, the deliberate ignition of a giant planet may become a means of space war. Such an event could sterilize the entire Solar System.

The scientific basis for this study can be found in the article "Necessary conditions for the initiation and propagation of nuclear detonation waves in plane atmospheres,"
Thomas Weaver and A. Wood, Physical Review A 20, 1 July 1979,
http://www.lhcdefense.org/pdf/LHC%20-%20Sancho%20v.%20Doe%20-%20Atmosphere%20Ignition%20-%202%20-%20Wood_AtmIgnition-1.pdf

That article rejected the possibility of a self-sustaining thermonuclear detonation propagating in the Earth's atmosphere or oceans, because the radiation losses cannot be balanced (which does not exclude reactions involving only a small fraction of earthly matter — still enough for disastrous consequences and human extinction).

There it is said: “We, therefore, conclude that thermonuclear-detonation waves cannot propagate in the terrestrial ocean by any mechanism by an astronomically large margin.

It is worth noting, in conclusion, that the susceptibility to thermonuclear detonation of a large body of hydrogenous material is an exceedingly sensitive function of its isotopic composition, and, specifically, of the deuterium atom fraction, as is implicit in the discussion just preceding. If, for instance, the terrestrial oceans contained deuterium at any atom fraction greater than 1:300 (instead of the actual value of 1:6000), the ocean could propagate an equilibrium thermonuclear-detonation wave at a temperature ≲2 keV (although a fantastic 10**30 ergs — 2 x 10**7 MT, or the total amount of solar energy incident on the Earth for a two-week period — would be required to initiate such a detonation at a deuterium concentration of 1:300). Now a non-negligible fraction of the matter in our own galaxy exists at temperatures much less than 300 °K, i.e., the gas-giant planets of our stellar system, nebulas, etc. Furthermore, it is well known that thermodynamically-governed isotopic fractionation ever more strongly favors higher relative concentration of deuterium as the temperature decreases, e.g., the D:H concentration ratio in the ~10**2 K Great Nebula in Orion is about 1:200. Finally, orbital velocities of matter about the galactic center of mass are of the order of 3 x 10**7 cm/sec at our distance from the galactic core.

It is thus quite conceivable that hydrogenous matter (e.g., CH4, NH3, H2O, or just H2) relatively rich in deuterium (1 at. %) could accumulate at its normal, zero-pressure density in substantial thicknesses on planetary surfaces, and such layering might even be a fairly common feature of the colder, gas-giant planets. If thereby highly enriched in deuterium (≳10 at. %), thermonuclear detonation of such layers could be initiated artificially with attainable nuclear explosives. Even with deuterium atom fractions approaching 0.3 at. % (less than that observed over multiparsec scales in Orion), however, such layers might be initiated into propagating thermonuclear detonation by the impact of large (diameter ~10**2 m), ultra-high velocity (~3 x 10**7 cm/sec) meteors or comets originating from nearer the galactic center. Such events, though exceedingly rare, would be spectacularly visible on distance scales of many parsecs."
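As a quick unit check on the quoted ignition-energy figure, here is a minimal sketch (the TNT-equivalent conversion constant is a standard physical value, not taken from the post):

```python
# Check that 10**30 ergs matches the quoted "2 x 10**7 MT" figure.
ERG_PER_JOULE = 1e7            # 1 J = 10**7 erg
JOULE_PER_MEGATON = 4.184e15   # standard TNT-equivalent of one megaton

ignition_energy_j = 1e30 / ERG_PER_JOULE          # 10**30 erg -> 1e23 J
megatons = ignition_energy_j / JOULE_PER_MEGATON
print(f"{megatons:.1e} Mt")    # ~2.4e+07 Mt, consistent with the quote
```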

Full text of my essay is here: http://www.scribd.com/doc/8299748/Giant-planets-ignition

November 14, 2008
Computer History Museum, Mountain View, CA

http://ieet.org/index.php/IEET/eventinfo/ieet20081114/

Organized by: Institute for Ethics and Emerging Technologies, the Center for Responsible Nanotechnology and the Lifeboat Foundation

A day-long seminar on threats to the future of humanity, natural and man-made, and the pro-active steps we can take to reduce these risks and build a more resilient civilization. Seminar participants are strongly encouraged to pre-order and review the Global Catastrophic Risks volume edited by Nick Bostrom and Milan Cirkovic, and contributed to by some of the faculty for this seminar.

This seminar will precede the futurist mega-gathering Convergence 08, November 15–16 at the same venue, which is co-sponsored by the IEET, Humanity Plus (World Transhumanist Association), the Singularity Institute for Artificial Intelligence, the Immortality Institute, the Foresight Institute, the Long Now Foundation, the Methuselah Foundation, the Millennium Project, the Reason Foundation, and the Acceleration Studies Foundation.

SEMINAR FACULTY

  • Nick Bostrom Ph.D., Director, Future of Humanity Institute, Oxford University
  • Jamais Cascio, research affiliate, Institute for the Future
  • James J. Hughes Ph.D., Exec. Director, Institute for Ethics and Emerging Technologies
  • Mike Treder, Executive Director, Center for Responsible Nanotechnology
  • Eliezer Yudkowsky, Research Associate, Singularity Institute for Artificial Intelligence
  • William Potter Ph.D., Director, James Martin Center for Nonproliferation Studies

REGISTRATION:
Before Nov 1: $100
After Nov 1 and at the door: $150

Cross posted from Nextbigfuture


I had previously looked at making large concrete or nanomaterial domes, monolithic or geodesic, over cities, which could protect a city from nuclear bombs.

Now Alexander Bolonkin has come up with a cheaper, technologically easier, and more practical approach using thin-film inflatable domes. It would not only provide protection from nuclear devices; it could also carry high-altitude communication equipment and windmill power, and support a lot of other money-generating uses. The film mass needed to cover 1 km**2 of ground area is about 600 tons/km**2, and the film cost is about $60,000/km**2.
The area of a big city 20 km in diameter is 314 km**2, and the area of the corresponding semi-spherical dome is 628 km**2, so the cost of the dome cover is about 62.8 million US dollars. We can reduce the overpressure (p = 0.001 atm) and decrease the cover cost by 5–7 times. The total cost of installation is about 30–90 million US dollars. At roughly $153 million to protect a city, it is cheaper than a geosynchronous satellite for high-speed communications. See Alexander Bolonkin's website.
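As a sanity check on that arithmetic, here is a minimal sketch (the ~$100,000 per km**2 cover rate is inferred from the post's 62.8 million / 628 km**2 figures, and the 600 t/km**2 film mass is taken from the text as stated):

```python
import math

# Dome geometry for a city 20 km in diameter.
r_km = 20.0 / 2
ground_area = math.pi * r_km**2      # ~314 km**2, as the post says
dome_area = 2 * math.pi * r_km**2    # hemisphere surface: ~628 km**2

# Cover cost at the implied ~$100,000 per km**2 of dome area.
cover_cost_usd = dome_area * 100_000          # ~$62.8 million
film_mass_tons = ground_area * 600            # ~188,000 t at 600 t/km**2 of ground

print(f"ground: {ground_area:.0f} km^2, dome: {dome_area:.0f} km^2")
print(f"cover: ${cover_cost_usd / 1e6:.1f}M, film: {film_mass_tons:,.0f} t")
```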

The author proposes a cheap, closed AB-Dome which protects densely populated cities from nuclear, chemical, and biological weapons (bombs) delivered by warheads, strategic missiles, rockets, and various kinds of aircraft. The AB-Dome is also very useful in peacetime, because it shields a city from exterior weather and creates a fine climate within. The hemispherical AB-Dome is an inflatable, thin, transparent film located at altitudes of up to 15 km, which converts the city into a closed-loop system. The film may be armored with stones, which destroy rockets and nuclear warheads. The AB-Dome protects the city in case of a world nuclear war and total poisoning of the Earth's atmosphere by radioactive fallout (gases and dust). Construction of the AB-Dome is easy: the enclosure's film is spread upon the ground, the air pump is turned on, and the cover rises to its planned altitude, supported by a small air overpressure. The offered method is thousands of times cheaper than protecting a city with current anti-rocket systems. The AB-Dome may also be used (at heights up to 15 km and more) for TV, communication, telescopes, long-distance radar, tourism, high-placed windmills (energy), illumination, and entertainment. The author has developed the theory of the AB-Dome, made estimates and computations, and worked out a typical project.

His idea is a thin dome covering a city with a very transparent film (2 in Fig. 1). The film has a thickness of 0.05–0.3 mm and is located at high altitude (5–20 km), supported there by a small additional air pressure produced by ground ventilators. It is connected to the ground by controlled cables (3). The film may have a controlled-transparency option, and as a further option the system can have a second, lower film (6) with controlled reflectivity.

The offered protection defends in the following way. The smallest space warhead has a minimum cross-section area of 1 m**2 and a huge speed of 3–5 km/s. The warhead receives a blow and overload from the film (mass about 0.5 kg). This overload is 500–1500 g and destroys the warhead (see the computation below). The warhead also receives an overpowering blow from 2–5 of the strong stones (each with a mass of 0.5–1 kg). The kinetic energy of each stone relative to the warhead is about 8 million joules (2–3 times more than the energy of 1 kg of explosive!). The film destroys a high-speed warhead (aircraft, bomber, cruise missile), especially if the film is armored with stones.
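A quick check of that stone kinetic-energy claim, as a minimal sketch using the post's numbers (the 4 km/s closing speed is a mid-range assumption from the stated 3–5 km/s):

```python
def stone_energy_joules(mass_kg: float, closing_speed_m_s: float) -> float:
    """Kinetic energy of a film-mounted stone in the warhead's reference frame."""
    return 0.5 * mass_kg * closing_speed_m_s**2

energy = stone_energy_joules(1.0, 4_000.0)   # 1 kg stone, 4 km/s warhead
print(f"{energy:.2e} J")                     # 8.00e+06 J, the post's ~8 MJ

# TNT releases roughly 4.2e6 J/kg, so this is about 2x the energy of 1 kg of
# explosive, consistent with the post's "2-3 times more" claim.
print(f"~{energy / 4.2e6:.1f} kg TNT equivalent")
```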

Our dome cover (film) has two layers: a top transparent layer (2), located at the maximum altitude (up to 5–20 km), and a lower transparent layer (4) with controlled reflectivity, located at an altitude of 1–3 km (optional). The upper transparent cover is about 0.05–0.3 mm thick and supports the strong protective stones (pebbles) (8). The stones have a mass of 0.2–1 kg each and are spaced about 0.5 m apart.

If we want to control the temperature in the city, the top film must have several layers: a transparent dielectric layer, a conducting layer (about 1–3 microns), a liquid-crystal layer (about 10–100 microns), another conducting layer (for example, SnO2), and a transparent dielectric layer. The total thickness is 0.05–0.5 mm, and the control voltage is 5–10 V. Such a film could be produced by industry relatively cheaply.

If some level of light control is needed, materials can be incorporated to control transparency. Also, transparent solar cells can be used to gather wide-area solar power.


According to Bolonkin's computations, a 10 kt bomb exploded at an altitude of 10 km has its air-blast effect decreased about 1,000 times, and its thermal-radiation effect decreased about 500 times without the second cover film, or about 5,000 times with the second, reflective film. A 100 kt hydrogen bomb exploded at an altitude of 10 km has its air-blast effect decreased about 10 times, and its thermal-radiation effect about 20 times without the second cover film, or about 200 times with it. Only a 1,000 kt thermonuclear (hydrogen) bomb could damage the city, and even then the damage would be 10 times less from air blast and 10 times less from thermal radiation. If the film is located at an altitude of 15 km, the damage would be 85 times less from the air blast and 65 times less from the thermal radiation.
For protection from larger thermonuclear (hydrogen) bombs, we need higher dome altitudes (20–30 km and more). An AB-Dome could also cover an important large region or a full country.

Because the dome is lightweight, it could stay in place even with very large holes. Multiple dome shells could be built for additional protection.

A better climate inside the dome could also make for more productive farming.

1. The AB-Dome is hundreds of times cheaper than current anti-rocket systems.
2. The AB-Dome does not need high technology and can be built by a poor country.
3. It is easy to build.
4. The dome is useful in peacetime; it creates a fine climate (weather) inside.
5. The AB-Dome protects against nuclear, chemical, and biological weapons.
6. The dome permits the autonomous existence of the city population after a total world nuclear war and total contamination of the planet and its atmosphere.
7. The dome may be used for regional TV, for communication, for long-distance radar, and for astronomy (telescopes).
8. The dome may be used for high-altitude tourism.
9. The dome may be used for high-altitude windmills (cheap renewable wind energy).
10. The dome may be used for night illumination and entertainment.

Cross-posted from Next Big Future by Brian Wang, Lifeboat Foundation Director of Research

I am presenting disruption events for humans, and also for biospheres and planets, and where I can I am correlating them with historical frequency and scale.

There has been previous work on categorizing and classifying extinction events: Bostrom's paper, and the work by Jamais Cascio and Michael Anissimov on classifying and identifying risks (presented below).

A recent article discusses the inevitable "end of societies" (it refers to civilizations, but it seems to be describing things like the end of the Roman Empire, after which Italy, Austria-Hungary, etc. still eventually emerged).

The theories around complexity seem to me to say that core developments along connected S-curves of technology and societal processes (around key areas of energy, transportation, governing efficiency, agriculture, and production) cap out, and then a society falls back (a soft or hard dark age), reconstitutes, and starts back up again.

Here is a wider range of disruptions, which can also be correlated with the frequency at which they have occurred historically.

High growth dropping to low growth (short business cycles: every few years)
Recessions (soft or deep): every five to fifteen years
Depressions: every 50–100 years (can be more frequent)

List of recessions for the USA (includes depressions)

Differences between a recession and a depression

A good rule of thumb for determining the difference between a recession and a depression is to look at the change in GNP. A depression is any economic downturn where real GDP declines by more than 10 percent; a recession is an economic downturn that is less severe. By this yardstick, the last depression in the United States was from May 1937 to June 1938, when real GDP declined by 18.2 percent. The Great Depression of the 1930s can be seen as two separate events: an incredibly severe depression lasting from August 1929 to March 1933, in which real GDP declined by almost 33 percent; a period of recovery; and then another, less severe depression in 1937–38. (Depressions occur every 50–100 years, and were more frequent in the past.)
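That rule of thumb reduces to a one-line test; a minimal sketch (the function name is mine, and the example figures are the ones quoted in the paragraph above):

```python
def classify_downturn(real_gdp_decline_pct: float) -> str:
    """Rule of thumb: a real GDP decline of more than 10% is a depression."""
    return "depression" if real_gdp_decline_pct > 10.0 else "recession"

print(classify_downturn(18.2))   # 1937-38 episode -> depression
print(classify_downturn(33.0))   # 1929-33 episode -> depression
print(classify_downturn(3.0))    # a typical milder downturn -> recession
```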

Dark age (a period of societal collapse, soft/light or regular)
I would say the difference between a long recession and a dark age has to do with a breakdown of societal order, some level of population decline or die-back, and a loss of knowledge or breakdown of education. (Once per thousand years.)

I would say that a soft dark age is also something like what China had from the 1400s to 1970: basically a series of really bad societal choices. It is maybe something between a depression and a dark age, or something that does not categorize as neatly; an underperformance by twenty times versus competing groups. Perhaps there should be some categorization of societal disorder, with levels and categories of major society-wide screw-ups: historic-level mistakes. The Chinese experience, I think, was triggered by the renunciation of the ocean-going fleet and of outside ideas and technology, plus a lot of other follow-on mistakes.

Plagues played a part in weakening the Roman and Han empires.

There is also discussion of societal collapse which includes Toynbee's analysis.

Toynbee argues that the breakdown of civilizations is not caused by loss of control over the environment, over the human environment, or by attacks from outside. Rather, it comes from the deterioration of the "Creative Minority," which eventually ceases to be creative and degenerates into merely a "Dominant Minority" (which forces the majority to obey without meriting obedience). He argues that creative minorities deteriorate due to a worship of their "former self," by which they become prideful and fail to adequately address the next challenge they face.

My take is that the Enlightenment would be strengthened by a larger creative majority, where everyone has a stake in and the capability to creatively advance society. I have an article about who the elite are now.

Many now argue that the Dark Ages were not as completely bad as commonly believed.
The Dark Ages are also called the Middle Ages.

Population during the middle ages

Between dark age/social collapse and extinction there are levels of decimation/devastation (using orders of magnitude: 90+%, 99%, 99.9%, 99.99%…).

Level 1 decimation = 90% population loss
Level 2 decimation = 99% population loss
Level 3 decimation = 99.9% population loss

Level 9 population loss would pretty much mean extinction for current human civilization: only 6–7 people left, or fewer, which would not be a viable population.
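The levels generalize to orders of magnitude of loss; a minimal sketch (the ~6.7 billion starting population is my assumption for a 2008-era world population):

```python
# Level n decimation = (1 - 10**-n) fractional population loss.
def survivors(level: int, population: float = 6.7e9) -> float:
    return population * 10.0**-level

for n in (1, 2, 3, 9):
    loss_pct = 100.0 * (1.0 - 10.0**-n)
    print(f"Level {n}: {loss_pct:.7f}% loss, ~{survivors(n):,.0f} survivors")
# Level 9 leaves ~7 people out of 6.7 billion, matching "only 6-7 people left".
```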

Can be regional or global, some number of species (for decimation)

Categorizations of Extinctions and End-of-World Categories

Can be regional or global, some number of species (for extinctions)

Mass extinction events have occurred in the past to other species (for each species there can be only one extinction event): the dinosaurs, and many others.

Unfortunately, Michael's Accelerating Future blog is having some issues, so here is a cached link.

Michael was identifying man-made risks. The easier-to-explain existential risks (remember, an existential risk is something that can set humanity way back, not necessarily killing everyone):

1. neoviruses
2. neobacteria
3. cybernetic biota
4. Drexlerian nanoweapons

The hardest to explain is probably #4. My proposal here is that, if someone has never heard of the concept of existential risk, it's easier to focus on these first four before even daring to mention the latter ones. But here they are anyway:

5. runaway self-replicating machines ("grey goo" is not recommended as a term because it is too narrow)
6. destructive takeoff initiated by an intelligence-amplified human
7. destructive takeoff initiated by a mind upload
8. destructive takeoff initiated by an artificial intelligence

Another classification scheme: the eschatological taxonomy by Jamais Cascio on Open the Future. His classification scheme has seven categories, one with two sub-categories. These are:

0: Regional Catastrophe (examples: moderate-case global warming, minor asteroid impact, local thermonuclear war)
1: Human Die-Back (examples: extreme-case global warming, moderate asteroid impact, global thermonuclear war)
2: Civilization Extinction (examples: worst-case global warming, significant asteroid impact, early-era molecular nanotech warfare)
3a: Human Extinction-Engineered (examples: targeted nano-plague, engineered sterility absent radical life extension)
3b: Human Extinction-Natural (examples: major asteroid impact, methane clathrates melt)
4: Biosphere Extinction (examples: massive asteroid impact, "iceball Earth" reemergence, late-era molecular nanotech warfare)
5: Planetary Extinction (examples: dwarf-planet-scale asteroid impact, nearby gamma-ray burst)
X: Planetary Elimination (example: post-Singularity beings disassemble planet to make computronium)

A couple of interesting posts about historical threats to civilization and life by Howard Bloom.

Natural climate shifts, and threats from space (not asteroids, but interstellar gases).

Humans are not the most successful life; bacteria are. Bacteria have survived for 3.85 billion years; humans, for 100,000 years. All other kinds of life have lasted no more than 160 million years. [Other species have only managed to hang in there for anywhere from 1.6 million years to 160 million. We humans are one of the shortest-lived natural experiments around. We've been here in one form or another for a paltry two and a half million years.] If your numbers are not big enough and you are not diverse enough, then something in nature eventually wipes you out.

Following the bacterial survival model could mean using transhumanism as a survival strategy: creating more diversity to allow for better survival. That means humans adapted to living under the sea, deep in the earth, in various niches in space, with more radiation resistance, in non-biological forms, etc. It would also mean spreading into space (panspermia). Individually, using technology, we could become very successful at life extension, but it will take more than that for a good plan for the long-term survival of humans (civilization, society, species).

Other periodic challenges:
142 mass extinctions, 80 glaciations in the last two million years, a planet that may have once been a frozen iceball, and a klatch of global warmings in which the temperature has soared by 18 degrees in ten years or less.

In the last 120,000 years there were 20 interludes in which the temperature of the planet shot up 10 to 18 degrees within a decade. Until just 10,000 years ago, the Gulf Stream shifted its route every 1,500 years or so. This would melt mega-islands of ice, put our coastal cities beneath the surface of the sea, and strip our farmlands of the conditions they need to produce the food that feeds us.

The solar system has a 240-million-year orbit around the center of our galaxy, an orbit that takes us through interstellar gas clusters called "local fluff": clusters that strip our planet of its protective heliosphere, bombard the earth with cosmic radiation, and trigger giant climate change.

Many of you have recently read that a research team at the University of Illinois led by Min-Feng Yu has developed a process to grow nanowires of unlimited length. The same process also allows for the construction of complex, three-dimensional nanoscale structures. If this is news to you, please refer to the links below.

It’s easy to let this news item slip past before its implications have a chance to sink in.

Professor Yu and his team have shown us a glimpse of how to make nanowire based materials that will, once the technology is developed more fully, allow for at least two very significant enhancements in materials science.

1. Nanowires that will be as long as we want them to be. The only limitations seem to be the size of the "ink" reservoir and the size of the spool that the nanowires are wound on. Scale up the ink supply and the size of the spool, and we'll soon be making cables and fabric. Make the cables long enough and braid enough of them together, and the Space Elevator Games may become even more exciting to watch.

2. The process should also lend itself very nicely to 3D printing of complex nanoscale structures. Actually building components that will allow the bootstrapping of a desktop-sized molecular manufacturing fab seems a lot closer than it was just a short time ago.

All of this highlights the need to more richly fund the Lifeboat Foundation in general and the Lifeboat Foundation’s NanoShield program in particular so that truly transformative technologies like these can be brought to market in a way that minimizes the risks of their powers being used for ill.

If you can, please consider donating to the Lifeboat Foundation. Every dollar helps us to safely bring a better world into being. The species you help save may be your own.

References:
http://www.news.uiuc.edu/news/08/0130nanofiber.html
http://www.sciencedaily.com/releases/2008/01/080130101732.htm
http://www3.interscience.wiley.com/cgi-bin/fulltext/117901964/PDFSTART

The Defense Advanced Research Projects Agency (DARPA) gave a $540,000 grant to researchers from Rice University to do a fast-tracked 9-month study on a new anti-radiation drug based on carbon nanotubes:

“More than half of those who suffer acute radiation injury die within 30 days, not from the initial radioactive particles themselves but from the devastation they cause in the immune system, the gastrointestinal tract and other parts of the body,” said James Tour, Rice’s Chao Professor of Chemistry, director of Rice’s Carbon Nanotechnology Laboratory (CNL) and principal investigator on the grant. “Ideally, we’d like to develop a drug that can be administered within 12 hours of exposure and prevent deaths from what are currently fatal exposure doses of ionizing radiation.” […]

The new study was commissioned after preliminary tests found the drug was greater than 5,000 times more effective at reducing the effects of acute radiation injury than the most effective drugs currently available. […]

The drug is based on single-walled carbon nanotubes, hollow cylinders of pure carbon that are about as wide as a strand of DNA. To form NTH, Rice scientists coat nanotubes with two common food preservatives — the antioxidant compounds butylated hydroxyanisole (BHA) and butylated hydroxytoluene (BHT) — and derivatives of those compounds.

An interesting side benefit is that the drug might also help cancer patients who are undergoing radiation treatment.

More here: Feds fund study of drug that may prevent radiation injury