


Two of Britain’s leading environmental thinkers say it is time to develop a quick technical fix for climate change. Writing in the journal Nature, Science Museum head Chris Rapley and Gaia theorist James Lovelock suggest looking at boosting ocean take-up of CO2.

Floating pipes reaching down from the ocean surface into colder water below move up and down with the swell.

As the pipe moves down, cold water flows up and out onto the ocean surface. A simple valve blocks any downward flow when the pipe is moving upwards.

Colder water is more “productive” — it contains more life, and so in principle can absorb more carbon.
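To get a feel for the mechanism, here is a back-of-envelope sketch (not from the Nature piece; every parameter is an assumption chosen purely for illustration) of how much cold water a single wave-driven pipe might bring to the surface:

```python
import math

# Hypothetical parameters, assumed purely for illustration.
pipe_radius_m = 0.5     # assumed pipe radius
stroke_m = 1.0          # assumed vertical travel per swell
wave_period_s = 8.0     # assumed swell period

# On each downstroke, one pipe cross-section times the stroke length of
# cold water is pushed up through the valve; the valve blocks backflow
# on the upstroke.
volume_per_cycle_m3 = math.pi * pipe_radius_m**2 * stroke_m

cycles_per_day = 24 * 3600 / wave_period_s
volume_per_day_m3 = volume_per_cycle_m3 * cycles_per_day

print(f"~{volume_per_cycle_m3:.2f} m^3 of cold water per wave cycle")
print(f"~{volume_per_day_m3:,.0f} m^3 per pipe per day")
```

Under these assumptions a single pipe moves on the order of 8,000 m^3 of nutrient-rich water per day; any climate-scale effect would clearly require very large numbers of pipes.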

Finally, some practical solutions to mitigate global warming are being introduced. The BBC article mentions a US company, Atmocean, that is already testing such a system.

Read the BBC or New York Times articles, both based on the same Nature piece.

When I read about the “Aurora Generator Test” video that was leaked to the media, I wondered, “Why leak it now, and who benefits?” Like many of you, I question the reasons behind any leak from an “unnamed source” inside the US federal government to the media. Hopefully we’ll all benefit from this particular leak.

Then I thought back to a conversation I had at a trade show booth I was working several years ago. I was speaking with a fellow from the power generation industry. He indicated that he was very worried about the security ramifications of a hardware refresh of the SCADA systems his utility was using to control its power generation equipment. The legacy UNIX-based SCADA systems were going to be replaced by Windows-based systems. He was even more worried that the “air gaps” that have historically been used to physically separate the SCADA control networks from the power company’s regular data networks might be removed to cut costs.

Thankfully, on July 19, 2007 the Federal Energy Regulatory Commission proposed to the North American Electric Reliability Corporation a set of new, and long overdue, cyber security standards that, once adopted and enforced, will do a lot to make an attacker’s job harder. Thank God, the people who operate the most critically important part of our national infrastructure have noticed the obvious.

Hopefully a little sunlight will help accelerate the process of reducing the attack surface of North America’s power grid.

After all, the march to the Singularity will go a lot slower without a reliable power grid.

Matt McGuirl, CISSP

A new biosensor developed at the Georgia Tech Research Institute (GTRI) can detect avian influenza in just minutes. In addition to being a rapid test, the biosensor is economical, field-deployable, sensitive to different viral strains and requires no labels or reagents.

This kind of technology could be applied to real time monitoring of other diseases as well.


Photograph of the optical biosensor, which is approximately 16 millimeters by 33 millimeters in size. The horizontal purple lines are the channels on the waveguide. Credit: Gary Meek

“We can do real-time monitoring of avian influenza infections on the farm, in live-bird markets or in poultry processing facilities,” said Jie Xu, a research scientist in GTRI’s Electro-Optical Systems Laboratory (EOSL).

The biosensor is coated with antibodies specifically designed to capture a protein located on the surface of the viral particle. For this study, the researchers evaluated the sensitivity of three unique antibodies to detect avian influenza virus.

The sensor utilizes the interference of light waves, a concept called interferometry, to precisely determine how many virus particles attach to the sensor’s surface. More specifically, light from a laser diode is coupled into an optical waveguide through a grating and travels under one sensing channel and one reference channel.

Researchers coat the sensing channel with the specific antibodies and coat the reference channel with non-specific antibodies. Having the reference channel minimizes the impact of non-specific interactions, as well as changes in temperature, pH and mechanical motion. Non-specific binding should occur equally to both the test and reference channels and thus not affect the test results.

An electromagnetic field associated with the light beams extends above the waveguides and is very sensitive to changes caused by antibody-antigen interactions on the waveguide surface. When a liquid sample passes over the waveguides, any binding on top of a waveguide due to viral particle attachment displaces water molecules, which changes the velocity of the light traveling through the waveguide.
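A minimal numerical sketch of the readout described above may help (all values are illustrative assumptions, not GTRI’s actual device parameters): binding on the sensing channel changes the effective refractive index seen by the guided light, producing a phase difference relative to the reference channel that appears as a change in the interference signal.

```python
import numpy as np

# Illustrative assumptions only, not the actual GTRI device parameters.
wavelength_m = 850e-9        # assumed laser diode wavelength
sensing_length_m = 10e-3     # assumed length of the sensing region

def interference_signal(delta_n_eff):
    """Normalized output of a two-channel waveguide interferometer.

    delta_n_eff: change in effective index on the sensing channel caused
    by antibody-antigen binding; the reference channel cancels common-mode
    effects such as temperature, pH, and mechanical motion.
    """
    phase = 2 * np.pi / wavelength_m * delta_n_eff * sensing_length_m
    return 0.5 * (1 + np.cos(phase))  # fringe intensity, 0..1

# Sweep a tiny index change, as virus particles displace water molecules.
for dn in (0.0, 1e-6, 5e-6, 1e-5):
    print(f"delta_n_eff = {dn:.0e} -> signal = {interference_signal(dn):.3f}")
```

The point of the sketch is the sensitivity: an effective-index change of a few parts in a million over a centimeter of waveguide already produces a resolvable shift in the fringe intensity.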

There are two sides to living as long as possible: developing the technologies to cure aging, such as SENS, and preventing human extinction risk, which threatens everybody. Unfortunately, in the life extensionist community, and the world at large, the balance of attention and support is lopsided in favor of the first side of the coin, while largely ignoring the second. I see people meticulously obsessed with caloric restriction and SENS, but apparently unaware of human extinction risks. There’s the global warming movement, sure, but no efforts to address the bio, nano, and AI risks.

It’s easy to understand why. Life extension therapies are a positive and happy thing, whereas existential risk is a negative and discouraging thing. The affect heuristic causes us to shy away from negative affect and focus only on projects with positive affect: life extension. Egocentric biases magnify the effect, because it’s easier to imagine oneself aging and dying than getting wiped out along with billions of others as a result of a planetary plague, for instance. Attributional biases work against both sides of the immortality coin: because there’s no visible bad guy to fight, people aren’t as juiced up as they would be about, say, protesting a human being like Bush.

Another element working against the risk side of the coin is the assignment of credit: a research team may be the first to significantly extend human life, in which case the team and all their supporters get bragging rights. Prevention of existential risks is a bit hazier, consisting of networks of safeguards that each contribute a little toward lowering the probability of disaster. Existential risk prevention isn’t likely to work the way it does in the movies, where the hero punches out the mad scientist right before he presses the red button that says “Planet Destroyer”; it will instead come from a cooperative network of individuals working to increase safety in the diverse areas from which risks could emerge: biotech, nanotech, and AI.

Present-day immortalists and transhumanists simply don’t care enough about existential risk. Many of them are at the same stage of ideological progression with regard to existential risk as most of humanity is with regard to the specter of death: accepting, in denial, dismissive. There are few things less pleasant to contemplate than humanity destroying itself, but it must be done anyhow, because if we slip and fall, there’s no getting up.

The greatest challenge is that the likelihood of disaster per year must be decreased to very low levels — less than 0.001% or something — because otherwise the aggregate probability computed over a series of years will approach 1 at the limit. There are many risks that even distributing ourselves throughout space would do nothing to combat — rogue, space-going AI, replicators that eat asteroids and live off sunlight, agents that pursue reproduction at the exclusion of value structures such as conscious experiences. Space colonization is not our silver bullet, despite what some might think. Relying overmuch on space colonization to combat existential risk may give us a false sense of security.
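The arithmetic behind the first point above, that the per-year probability must be pushed very low, is worth making explicit: if the annual probability of disaster is p and years are treated as independent trials, the chance of at least one disaster over n years is 1 - (1 - p)^n, which climbs toward 1 unless p is tiny. A quick sketch:

```python
def aggregate_risk(annual_p, years):
    """Probability of at least one disaster over `years`,
    treating each year as an independent trial with probability `annual_p`."""
    return 1 - (1 - annual_p) ** years

for annual_p in (0.01, 0.001, 0.00001):   # 1%, 0.1%, 0.001% per year
    for years in (100, 1000, 10000):
        print(f"p = {annual_p:.3%}/yr over {years:>5} years: "
              f"{aggregate_risk(annual_p, years):.1%}")
```

At 1% per year, disaster is more likely than not within a single century; even at 0.001% per year, the aggregate risk over ten thousand years approaches 10%.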

Yesterday it hit the national news that synthetic life is on its way within 3 to 10 years. To anyone following the field, this comes as zero surprise, but there are many thinkers out there who might not have seen it coming. The Lifeboat Foundation, which saw this coming well in advance, set up the A-Prize as an effort to bring development of artificial life out into the open, where it should be. The A-Prize currently has a grand total of three donors: myself, Sergio Tarrero, and one anonymous donor. This is probably a result of insufficient publicity, though.

Genetically engineered viruses are a risk today. Synthetic life will be a risk in 3–10 years. AI could be a risk in 10 years, or it could be a risk now — we have no idea. The fastest supercomputers are already approximating the computing power of the human brain, and since an airplane is far less complex than a bird yet still flies, we should assume that less-than-human computing power may be sufficient for AI. Nanotechnological replicators, a distinct category of replicator that blurs into synthetic life at the extremes, could be a risk in 5–15 years — again, we don’t know. Better to assume they’re coming sooner, and be safe rather than sorry.

Humanity has lived essentially without existential risks (save the tiny probability of asteroid impact) since Homo sapiens evolved over 100,000 years ago, and we’re about to be hit full-force by these new risks in the next 3–15 years; against that backdrop, the interval between now and then is practically nothing. Ideally, we’d have 100 or 500 years of advance notice to prepare for these risks, not 3–15. But since 3–15 is all we have, we’d better use it.

If humanity continues to survive, the technologies for radical life extension are sure to be developed, taking into account economic considerations alone. The efforts of Aubrey de Grey and others may hurry it along, saving a few million lives in the process, and that’s great. But if we develop SENS only to destroy ourselves a few years later, it’s worse than useless. It’s better to overinvest in existential risk, encourage cryonics for those whose bodies can’t last until aging is defeated, and address aging once we have a handle on existential risk, which we quite obviously don’t. Remember: there will always be more people paying attention to radical life extension than existential risk, so the former won’t be losing much if you shift your focus to the latter. As fellow blogger Steven says, “You have only a small fraction of the world’s eggs; putting them all in the best available basket will help, not harm, the global egg spreading effort.”

For more on why I think fighting existential risk should be central for any life extensionist, see Immortalist Utilitarianism, written in 2004.

A point on human extinction risk analysis.

To look at existential risk rationally requires that we maintain a cool, detached perspective. It’s somewhat hard to think of how this might be done, although watching videos of planetary destruction could actually help! Just as a detective needs to look at a few crime scenes before he gains the experience to move beyond being a simple gumshoe, existential risk analysts need to view simulations and thought experiments of planetary destruction before they can consider it without flinching. Because it is impossible to acquire experience of human extinction risk, as by definition no one is alive afterwards, we have to settle for simulations.

The reaction of many educated adults to extinction risk discussions reminds me of the reaction kids in my Middle School health classes had to the mention of the word “penis”: adolescent giggling. If I were to get onstage in front of a random audience and start talking about existential risk when they didn’t expect it, using words like “planetary destruction”, they’d probably start giggling, at least in their minds. Obviously, we have some maturing to do as a society before we can look calmly at the prospect of our own demise. By resolving to do so yourself, you can be part of the solution instead of part of the problem.

Last week a blogger for the Houston Chronicle, Eric Berger, covered my post on immortality and extinction risk, and the immaturity of most of the comments received is expected but also telling. One reader writes that we should hire Will Smith to save the world; another writes: “I don’t worry about this sort of thing, because when it happens, I’ll be dead and won’t care.” Just as you see someone’s true self a little better when they’re a tad tipsy, we get to see what people really think of extinction risk analysis from their anonymous comments on a big website. When people are on the record, they aren’t likely to make pithy comments like those on the blog, but they might be thinking them, and what they say in public is likely to be a dressed-up version of these sentiments. For instance, an article that appeared in The Mercury on 22 April 2003, “Disastronomer Royal: More Apocalyptic than the Pope”, exemplifies the reaction to those who take the prospect of extinction risk seriously, in this case Martin Rees. Extinction denialist articles are not hard to find on the Internet: just Google them.

Ideally, existential risk analysis should be getting hundreds of millions of dollars in funding, as the study of global warming does today. Until there are planetary immune systems in place that can respond so quickly and comprehensively that the likelihood of terminal disaster is reduced to practically nothing, existential risk mitigation should be the number one priority of the human species. And the first step is for individuals, such as yourself, to look at the prospect of human extinction in a serious way.

There are dozens of published existential risks; there are undoubtedly many more that Nick Bostrom did not think of in his paper on the subject. Ideally, the Lifeboat Foundation and other organizations would identify each of these risks and take action to combat them all, but this simply isn’t realistic. We have a finite budget and a finite number of man-hours to spend on the problem, and our resources aren’t even particularly large compared with other non-profit organizations. If Lifeboat or other organizations are going to take serious action against existential risk, we need to identify the areas where we can do the most good, even at the expense of ignoring other risks. Humans like to totally eliminate risks, but this is a cognitive bias; it does not correspond to the most effective strategy. In general, when assessing existential risks, there are a number of useful heuristics:

- Any risk which has become widely known, or an issue in contemporary politics, will probably be very hard to deal with. Thus, even if it is a legitimate risk, it may be worth putting on the back burner; there’s no point in spending millions of dollars for little gain.

- Any risk which is totally natural (could happen without human intervention) must be highly improbable, as we know we have been on this planet for a hundred thousand years without getting killed off. To estimate the probability of these risks, use Laplace’s Law of Succession (a worked sketch follows this list).

- Risks which we cannot affect the probability of can be safely ignored. It does us little good to know that there is a 1% chance of doom next Thursday, if we can’t do anything about it.
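To make the Laplace’s Law of Succession heuristic from the second bullet concrete, here is the worked sketch promised above (the 100,000-year figure is the one used in that bullet; the rule itself estimates an event’s probability as (successes + 1) / (trials + 2)):

```python
def laplace_succession(successes, trials):
    """Laplace's Law of Succession: estimate the probability of an event
    that has occurred `successes` times in `trials` independent trials."""
    return (successes + 1) / (trials + 2)

# Zero observed human extinctions in ~100,000 years of Homo sapiens:
annual_estimate = laplace_succession(0, 100_000)
print(f"Estimated natural extinction risk: ~{annual_estimate:.1e} per year")
# -> ~1.0e-05 per year: any purely natural extinction risk must be
#    highly improbable, exactly as the heuristic claims.
```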

Some specific risks which can be safely ignored:

- Particle accelerator accidents. We don’t yet know enough high-energy physics to say conclusively that a particle accelerator could never create a true vacuum, stable strangelet, or another universe-destroying particle. Luckily, we don’t have to; cosmic rays have been bombarding us for the past four billion years, with energies a million times higher than anything we can create in an accelerator. If it were possible to annihilate the planet with a high-energy particle collision, it would have happened already.

- The simulation gets shut down. The idea that “the universe is a simulation” is equally good at explaining every outcome: no matter what happens in the universe, you can concoct some reason why the simulators would engineer it. Which specific actions would make the universe safer from being shut down? We have no clue, and barring a revelation from On High, we have no way to find out. If we do try to take action to stop the universe from being shut down, it could just as easily make the risk worse.

- A long list of natural scenarios. To quote Nick Bostrom: “solar flares, supernovae, black hole explosions or mergers, gamma-ray bursts, galactic center outbursts, supervolcanos, loss of biodiversity, buildup of air pollution, gradual loss of human fertility, and various religious doomsday scenarios.” We can’t prevent most of these anyway, even if they were serious risks.

Some specific risks which should be given lower priority:

- Asteroid impact. This is a serious risk, but it still has a fairly low probability, on the order of one in 10^5 to 10^7 for something that would threaten the human species within the next century or so. Mitigation is also likely to be quite expensive compared to other risks.

- Global climate change. While this is fairly probable, its impact isn’t likely to be severe enough to qualify as an existential risk. The IPCC Fourth Assessment Report has concluded that it is “very likely” that there will be more heat waves and heavy rainfall events, while it is “likely” that there will be more droughts, hurricanes, and extreme high tides; these do not qualify as existential risks, or even anything particularly serious. We know from past temperature data that the Earth can warm by 6–9 °C on a fairly short timescale without causing a permanent collapse or even a mass extinction. Additionally, climate change has become a political problem, making it next to impossible to implement serious measures without a massive effort.

- Nuclear war is a special case, because although we can’t do much to prevent it, we can take action to prepare for it in case it does happen. We don’t even have to think about the best ways to prepare; there are already published, reviewed books detailing what can be done to seek safety in the event of a nuclear catastrophe. I firmly believe that every transhumanist organization should have a contingency plan in the event of nuclear war, economic depression, a conventional WWIII or another political disaster. This planet is too important to let it get blown up because the people saving it were “collateral damage”.

- Terrorism. It may be the bogeyman-of-the-decade, but terrorists are not going to deliberately destroy the Earth; terrorism is a political tool with political goals that require someone to be alive. While terrorists might do something stupid which results in an existential risk, “terrorism” isn’t a special case that we need to separately plan for; a virus, nanoreplicator or UFAI is just as deadly regardless of where it comes from.

Recently, our international spokesperson, Philippe Van Nedervelde, spoke to the deputy editor of Betterhumans, Parish Mozdzierz, on the Lifeboat Foundation, its goals and activities. Here is the first question:

Betterhumans: How did the formation of the Lifeboat Foundation come about?

Philippe Van Nedervelde: Lifeboat Foundation’s founder, Eric Klien, was shaken wide awake by 9/11. The new reality of what we call (exponentially accelerating) “Asymmetric Destructive Capability” (ADC) fully hit him: ever smaller groups of people can create ever more enormous amounts of damage, all thanks to advances in technology. As a bracelet-wearing cryonicist, he knew of the potential of nanotechnology (having attended MIT Nanotechnology Group meetings in the late 1980s), and he knew that 9/11 was just a taste of things to come. Accordingly, the Lifeboat Foundation was incorporated within months of 9/11.

Read the whole thing here.

(via IsraGood)

With many governments pursuing new ways to power their cities via green energy, it looks like they soon may have another option to add to their list.

While most people think of cows as “unconverted” forms of lunch and dinner, these harmless beasts may be able to energize our communities through the smelly presents they often leave behind.

(Globes Online) GES said that the Hefer Valley plant is the first large-scale plant of its kind in Israel, and one of the first in the world. The plant utilizes 600 tons of manure a day. The manure is sterilized, and the solid and liquid waste are then processed to produce methane, which drives the generators to make electricity.

Granite Hacarmel CEO Amiaz Sagis said, “This is unquestionably an important milestone. This facility fits in with Granite Hacarmel’s strategy to invest in infrastructures and ecology. The company is also investing resources to develop alternative energy, water treatment, and desalination.”
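A rough back-of-envelope estimate suggests what 600 tons of manure a day could mean in electrical terms. None of the figures below come from the article: the biogas yield, methane fraction, and generator efficiency are generic assumptions, so the result is only an order-of-magnitude sketch.

```python
# All parameters except the manure tonnage are generic assumptions,
# not figures from the Hefer Valley plant.
manure_tons_per_day = 600      # from the article
biogas_m3_per_ton = 30         # assumed biogas yield per ton of manure
methane_fraction = 0.6         # assumed methane share of the biogas
methane_lhv_mj_per_m3 = 36     # approximate lower heating value of methane
electrical_efficiency = 0.35   # assumed gas-engine generator efficiency

biogas_m3 = manure_tons_per_day * biogas_m3_per_ton
thermal_mj = biogas_m3 * methane_fraction * methane_lhv_mj_per_m3
electric_mj = thermal_mj * electrical_efficiency

avg_power_mw = electric_mj / (24 * 3600)   # MJ/day over s/day gives MW
print(f"~{avg_power_mw:.1f} MW average electrical output")
```

Under these assumptions the plant would average on the order of 1–2 MW, enough for a few thousand homes rather than a city, which fits manure’s role as a useful but modest energy source.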

Ironically, this whole development began when the Hefer Valley Cooperative Society was ordered to find a solution for reducing the pollution produced by 12,000 cows. It seems that after some head scratching, this power plant was built, allowing the community not only to reduce pollution, but to find a unique way of keeping the lights on.

Although many nations would benefit from turning cow (or sheep, horse, camel, etc.) manure into energy, there probably is not enough of this stuff to both fuel our planet and make our gardens grow. However, if researchers found a way to turn “human manure” into energy, we could ultimately have a renewable energy source that keeps up with our ever-growing population.

NASA’s Marshall Space Flight Center has designed a nuclear-warhead-carrying spacecraft that would be boosted by the US agency’s proposed Ares V cargo launch vehicle to deflect asteroids.

The Ares V launch vehicle is scheduled to first fly in 2018. It would launch 130 tons to LEO.

I welcome this study for providing a clearer analysis of the deflection options and of the costs of searching for threatening asteroids.

The 8.9m (29ft)-long “Cradle” spacecraft would carry six 1,500kg (3,300lb) missile-like interceptor vehicles that would carry one 1.2MT B83 nuclear warhead each, with a total mass of 11,035kg.

99942 Apophis is a near-Earth asteroid that caused a brief period of concern in December 2004 because initial observations indicated a relatively large probability that it would strike the Earth in 2029. It is 350 meters across and weighs about 46 million tons.

The study team assessed a series of approaches that could be used to divert a NEO potentially on a collision course with Earth. Nuclear explosives, as well as non-nuclear options, were assessed.
• Nuclear standoff explosions are assessed to be 10–100 times more effective than the non-nuclear alternatives analyzed in this study. Other techniques involving the surface or subsurface use of nuclear explosives may be more efficient, but they run an increased risk of fracturing the target NEO. They also carry higher development and operations risks.
• Non-nuclear kinetic impactors are the most mature approach and could be used in some deflection/mitigation scenarios, especially for NEOs that consist of a single small, solid body.
• “Slow push” mitigation techniques are the most expensive, have the lowest level of technical readiness, and their ability to both travel to and divert a threatening NEO would be limited unless mission durations of many years to decades are possible.
• 30–80 percent of potentially hazardous NEOs are in orbits that are beyond the capability of current or planned launch systems. Therefore, planetary gravity assist swingby trajectories or on-orbit assembly of modular propulsion systems may be needed to augment launch vehicle performance, if these objects need to be deflected.
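One way to see why the study cares so much about launch capability and warning time: in a simplified model that ignores orbital mechanics, a velocity change applied t years before the encounter displaces the asteroid by roughly delta-v × t at Earth, so the required push shrinks in proportion to the lead time. The sketch below uses the Apophis mass quoted earlier; the lead times and the one-Earth-radius miss distance are assumptions.

```python
EARTH_RADIUS_M = 6.371e6       # target miss distance: one Earth radius
SECONDS_PER_YEAR = 3.156e7

asteroid_mass_kg = 4.6e10      # Apophis, "about 46 million tons" (above)

# Simplified model: displacement at encounter ~ delta_v * lead_time.
for lead_years in (1, 5, 10, 20):
    lead_s = lead_years * SECONDS_PER_YEAR
    delta_v = EARTH_RADIUS_M / lead_s       # m/s needed to miss Earth
    impulse = asteroid_mass_kg * delta_v    # total momentum change, N*s
    print(f"{lead_years:>2} yr lead: delta_v ~ {delta_v * 100:.1f} cm/s, "
          f"impulse ~ {impulse:.1e} N*s")
```

With twenty years of warning the required nudge is about a centimeter per second, plausibly within reach of gentler methods; with only a year it grows twenty-fold, which is the regime where the higher-energy nuclear options dominate.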


This diagram shows that the nuclear options work better and can handle asteroids up to 950 meters in size.


This table shows that a performance index of 1 means a method was good enough to perform a successful deflection; less than 1 means more launches are needed.


This is a drawing of the deflection vehicle.

The Lifeboat Foundation has an asteroid shield program.

Increasingly, tools readily available on the Internet enable independent specialists or even members of the general public to do intelligence work that used to be the monopoly of agencies like the CIA, KGB, or MI6. Playing the role of an armchair James Bond, Hans K. Kristensen, a nuclear weapons specialist at the Federation of American Scientists (FAS) in Washington, D.C., recently drew attention to images on Google Earth of Chinese sites. Kristensen believes that the pictures shed light on China’s deployment of its second-generation of nuclear weapons systems: one appears to be a new ballistic missile submarine [see above image]; others may capture the replacement of liquid-fueled rockets with solid-fuel rockets at sites in north-central China, within range of ICBM fields in southern Russia.

Source: IEEE Spectrum. An excellent example of how open source intelligence can outsmart military intelligence.

See also: Nuclear terrorism: the new day after from the Bulletin of the Atomic Scientists. From the article:

Finally, there is the question of whether the U.S. government would behave with rational restraint. This, of course, assumes that there is a government. A terrorist nuclear attack on Washington could easily kill the president, vice president, much of Congress and the Supreme Court. But in a July 12 Washington Post op-ed, Norman Ornstein revealed that the federal government has refused to make contingency plans for its own nuclear decapitation, which means that U.S. nuclear weapons could be in the hands of small, enraged launch control teams with no clear line of authority above them. Assuming that the federal government was still there, however, we can only imagine (using the reaction to the loss of a mere two buildings on 9/11 as a metric of comparison) the public rage at the loss of a city and the intense, perhaps irresistible, pressure on the president to make someone, somewhere pay for this atrocity.