
Following is a discussion of two potential threats to humanity: one that has existed for eons, and a second that we have recently seen resurface after thinking it had been laid to rest.

First, a recent story on PhysOrg describes work by researchers at Vanderbilt University, who isolated antibodies from elderly people who had survived the 1918 flu pandemic. This comes three years after researchers at Mount Sinai and the Armed Forces Institute of Pathology in Washington, D.C. recovered the virus that caused the outbreak from the frozen bodies of people in Alaska who had died in the pandemic.

In addition to being an impressive achievement of biomedical science (it involved isolating antibody-secreting B cells from donors and generating “immortalized” cell lines to produce large amounts of antibodies), this research demonstrates the remarkable memory of the immune system (90 years!), as well as scientists’ ability to take tissue samples from people born nearly a century ago and fashion them into a potential weapon against similar future outbreaks. Indeed, the manufactured antibodies proved effective against the 1918 flu virus when tested in mice.

Furthermore, such research provides tools which could help generate antibodies to treat other viruses which still blight humanity (such as HIV) or are seen as potential threats, such as avian influenza.

http://www.physorg.com/news138198336.html

Second, nuclear annihilation. Russia’s recent foray into Georgia and the ensuing tensions with the west have brought the specter of the cold war back from the dead, and with it increasing levels of aggressive rhetoric from both sides and more or less veiled threats of action, some of it diplomatic, some military.

For nearly twenty years, ever since the fall of the Soviet Union, we have become used to living in a world no longer directly and overtly threatened with complete annihilation through World War III. Is this about to change? It would seem that despite current tensions, present conditions are far from fostering a renewed cold war.

Modern-day Russia (and China can be described along similar lines) is inexorably tied to the world economy and does not represent a conflicting ideology striving for world domination, as was the case during most of the latter half of the twentieth century. This deep international integration stems from the almost global acceptance of the market economy as the preferred driving force for economic growth, albeit under different forms of government. Both Russia and China are (currently) fueled more by the will to be recognized as premier global powers than by the will to rule the world: the former wishes to return to its previous position and reclaim the respect it feels it lost over the last couple of decades, and the latter is rising anew after centuries in the shadows.

Of course, the coming elections in the US may change the tone prevalent in the international brinkmanship game, although the involvement of the EU, currently led by French President Sarkozy, means that the tenor of statements coming from Western Europe is unlikely to change fundamentally.

So, unless further surprises are in store for us (a possibility that cannot be ignored when dealing with political and military maneuvering, especially amid the tense conditions prevalent in many of the former Soviet republics), a compromise will eventually be reached and respected. The first signs of a calming effort have already appeared in recent days, with much less inflammatory declarations from both sides and signs of a Russian willingness to tone down at least the public face of its disagreements with the EU and US. This is likely to continue…at least until the next outbreak of nationalistic violence or political sword-brandishing in a region where tensions run high.

An interesting analysis of the current situation can be found at: http://www.cnn.com/2008/WORLD/europe/08/29/oakley.eu.russia/

Researchers have devised a rapid and efficient method for generating protein sentinels of the immune system, called monoclonal antibodies, which mark and neutralize foreign invaders.

For both ethical and practical reasons, monoclonals are usually made in mice. And that’s a problem, because the human immune system recognizes the mouse proteins as foreign and sometimes attacks them instead. The result can be an allergic reaction, and sometimes even death.

To get around that problem, researchers now “humanize” the antibodies, replacing some or all of the mouse-derived pieces with human ones.

Wilson and Ahmed were interested in the immune response to vaccination. Conventional wisdom held that the B-cell response would be dominated by “memory” B cells. But as the study authors monitored individuals vaccinated against influenza, they found that a different population of B cells peaked about one week after vaccination, and then disappeared, before the memory cells kicked in. This population of cells, called antibody-secreting plasma cells (ASCs), is highly enriched for cells that target the vaccine, with vaccine-specific cells accounting for nearly 70 percent of all ASCs.

“That’s the trick,” said Wilson. “So instead of one cell in 1,000 binding to the vaccines, now it is seven in 10 cells.”

All of a sudden, the researchers had access to a highly enriched pool of antibody-secreting cells, something that is relatively easy to produce in mice, but hard to come by for human B cells.

To ramp up the production and cloning of these antibodies, the researchers added a second twist. Mouse monoclonal antibodies are traditionally produced in the lab from hybridomas, which are cell lines made by fusing the antibody-producing cell with a cancer cell. But human cells don’t respond well to this treatment. So Wilson and his colleagues isolated the ASC antibody genes and transferred them into an “immortalized” cell line. The result was the generation of more than 100 different monoclonals in less than a year, with each taking just a few weeks to produce.

In the event of an emerging flu pandemic, for instance, this approach could lead to faster production of human monoclonals to both diagnose and protect against the disease.

Journal Nature article: Rapid cloning of high-affinity human monoclonal antibodies against influenza virus

Nature 453, 667–671 (29 May 2008) | doi:10.1038/nature06890; Received 16 October 2007; Accepted 4 March 2008; Published online 30 April 2008

Pre-existing neutralizing antibody provides the first line of defence against pathogens in general. For influenza virus, annual vaccinations are given to maintain protective levels of antibody against the currently circulating strains. Here we report that after booster vaccination there was a rapid and robust influenza-specific IgG+ antibody-secreting plasma cell (ASC) response that peaked at approximately day 7 and accounted for up to 6% of peripheral blood B cells. These ASCs could be distinguished from influenza-specific IgG+ memory B cells that peaked 14–21 days after vaccination and averaged 1% of all B cells. Importantly, as much as 80% of ASCs purified at the peak of the response were influenza specific. This ASC response was characterized by a highly restricted B-cell receptor (BCR) repertoire that in some donors was dominated by only a few B-cell clones. This pauci-clonal response, however, showed extensive intraclonal diversification from accumulated somatic mutations. We used the immunoglobulin variable regions isolated from sorted single ASCs to produce over 50 human monoclonal antibodies (mAbs) that bound to the three influenza vaccine strains with high affinity. This strategy demonstrates that we can generate multiple high-affinity mAbs from humans within a month after vaccination. The panel of influenza-virus-specific human mAbs allowed us to address the issue of original antigenic sin (OAS): the phenomenon where the induced antibody shows higher affinity to a previously encountered influenza virus strain compared with the virus strain present in the vaccine1. However, we found that most of the influenza-virus-specific mAbs showed the highest affinity for the current vaccine strain. Thus, OAS does not seem to be a common occurrence in normal, healthy adults receiving influenza vaccination.

What is metabolomics?

Genes are similar to the plans for a house; they show what it looks like, but not what people are getting up to inside. One way of getting a snapshot of their lives would be to rummage through their rubbish, and that is pretty much what metabolomics does. […]

Metabolomics studies metabolites, the by-products of the hundreds of thousands of chemical reactions that continuously go on in every cell of the human body. Because blood and urine are packed with these compounds, it should be possible to detect and analyse them. If, say, a tumour were growing somewhere, then long before any existing method could detect it, the combination of metabolites from the dividing cancer cells would produce a new pattern, different from that seen in healthy tissue. Such metabolic changes could be picked up by computer programs adapted from those that credit-card companies use to detect fraud by spotting sudden and unusual spending patterns amid millions of ordinary transactions.
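To make the fraud-detection analogy concrete, here is a minimal sketch (with made-up numbers) of how a program might flag a metabolite profile that deviates from a healthy baseline. Real metabolomics pipelines use far more sophisticated statistics; this is just the simplest possible version of "spot the unusual pattern":

```python
# Hypothetical sketch: flag an anomalous metabolite profile the way
# fraud-detection systems flag unusual spending, by measuring how far
# a new sample deviates from a baseline of healthy profiles.
# All numbers here are made up for illustration.

def mean_and_std(samples):
    """Per-metabolite mean and standard deviation across baseline samples."""
    n = len(samples)
    means = [sum(col) / n for col in zip(*samples)]
    stds = []
    for j, m in enumerate(means):
        var = sum((s[j] - m) ** 2 for s in samples) / n
        stds.append(var ** 0.5 or 1.0)  # guard against zero spread
    return means, stds

def is_anomalous(profile, means, stds, threshold=3.0):
    """True if any metabolite deviates more than `threshold` std devs."""
    return any(abs(x - m) / s > threshold
               for x, m, s in zip(profile, means, stds))

# Baseline: concentrations of three metabolites in four healthy samples.
healthy = [[1.0, 2.0, 0.5], [1.1, 1.9, 0.6], [0.9, 2.1, 0.4], [1.0, 2.0, 0.5]]
means, stds = mean_and_std(healthy)
print(is_anomalous([1.0, 2.0, 0.5], means, stds))  # typical profile -> False
print(is_anomalous([1.0, 9.0, 0.5], means, stds))  # elevated metabolite -> True
```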

This could be used in traditional medicine, both to prevent pathologies and to detect those already present so they can be treated. But another use would be as part of an early-detection system to defend against pandemics and biological attacks. As mentioned previously, network theory can help us make better use of vaccines. But once you have a cure or antidote, you also need to identify people who are already infected; the earlier you can do that after infection, the better their chances of survival.

Once the techniques of metabolomics are sufficiently advanced and inexpensive to use, they might provide better data than simply relying on reported symptoms (might be too late by then), and might scale better than traditional detection methods (not sure yet — something else might make more economic sense). It’s a bit too early to tell, but it’s a very promising field.

For more information, see Douglas Kell’s site or Wikipedia: Metabolomics.

Source: The Economist. See also Lifeboat’s BioShield program.

If a pandemic strikes and hundreds of millions are at risk, we won’t have enough vaccines for everybody, at least not within the time window where vaccines would help. But a new strategy could help use the vaccines we have more effectively:

Researchers are now proposing a new strategy for targeting shots that could, at least in theory, stop a pandemic from spreading along the network of social interactions. Vaccinating selected people is essentially equivalent to cutting out nodes of the social network. As far as the pandemic is concerned, it’s as if those people no longer exist. The team’s idea is to single out people so that immunizing them breaks up the network into smaller parts of roughly equal sizes. Computer simulations show that this strategy could block a pandemic using 5 to 50 percent fewer doses than existing strategies, the researchers write in an upcoming Physical Review Letters.


So you break up the general social network into sub-networks, and then you target the most important nodes of these sub-networks and so on until you run out of vaccines. The challenge will be to get good information about social networks, something not quite as easy as mapping computer networks, but there is progress on that front.
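As a toy illustration of the node-removal idea (not the researchers' actual partitioning algorithm, which is more sophisticated), here is a sketch with a made-up nine-person network: vaccinating the two "bridge" individuals splits the network into small pieces, capping how far an outbreak can spread.

```python
# Sketch of targeted immunization on a toy social network (made-up data).
# "Vaccinating" a person removes their node from the network; we then
# measure the largest connected component, i.e. the biggest group a
# pandemic could still sweep through.

from collections import deque

graph = {  # adjacency list: two communities joined by "bridge" people 3 and 4
    0: [1, 2, 3], 1: [0, 2], 2: [0, 1], 3: [0, 4],
    4: [3, 5], 5: [4, 6, 7, 8], 6: [5, 7], 7: [5, 6], 8: [5],
}

def largest_component(graph, removed):
    """Size of the largest connected component after removing `removed` nodes."""
    seen, best = set(removed), 0
    for start in graph:
        if start in seen:
            continue
        size, queue = 0, deque([start])  # breadth-first search from `start`
        seen.add(start)
        while queue:
            node = queue.popleft()
            size += 1
            for nbr in graph[node]:
                if nbr not in seen:
                    seen.add(nbr)
                    queue.append(nbr)
        best = max(best, size)
    return best

print(largest_component(graph, removed=set()))   # no vaccination: 9
print(largest_component(graph, removed={3, 4}))  # vaccinate the bridges: 4
```

With only two doses spent on the bridge nodes, the worst-case outbreak shrinks from the whole population to a single small community.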

In one of the most dramatic illustrations of their technique, the researchers simulated the spread of a pandemic using data from a Swedish study of social connections, in which more than 310,000 people are represented and connected based on whether they live in the same household or they work in the same place. With the new method, the epidemic spread to about 4 percent of the population, compared to nearly 40 percent for more standard strategies, the team reports.

Source: ScienceNews. See also Lifeboat’s BioShield program.

There is a strong overlap between those concerned about extinction risk prevention and healthy life extension. Accordingly, many supporters of the Lifeboat Foundation will be attending an open event on regenerative medicine taking place on the UCLA campus on the 27th of June. Here is the blurb:

On Friday, June 27th, leading scientists and thinkers in stem cell research and regenerative medicine will gather in Los Angeles at UCLA for Aging 2008 to explain how their work can combat human aging, and the sociological implications of developing rejuvenation therapies. Aging 2008 is free, with advance registration required.

UCLA Royce Hall
Friday June 27th | Doors open 4pm
405 Hilgard Ave, Los Angeles, CA 90024

This special public event is being organized by the Methuselah Foundation. Dr. Aubrey de Grey, chairman and chief science officer of the Methuselah Foundation, said, “Our organization has raised over $10 million to crack open the logjams in longevity science. With the two-armed strategy of direct investments into key research projects, and a competitive prize to spur on competing scientists’ race to break rejuvenation and longevity records in lab mice, the Foundation is actively accelerating the drive toward a future free of age-related degeneration.” The Methuselah Foundation has been covered by “60 Minutes,” Popular Science, the Wall Street Journal, and other top-flight media outlets.

If any Lifeboat Foundation supporters are interested in meeting up before or after the event, comment on this post.

The report, “Synthetic Biology: Social and Ethical Challenges”, highlights concerns about ownership, misuse, unintended consequences, and accidental release of synthetic organisms into the environment.

Andrew Balmer and Professor Paul Martin, the report’s authors, suggest a threat from “garage biology”, with people experimenting at home. They also emphasise that there is no policy on the impact of synthetic biology on international bioweapons conventions.

Read the entire report here (PDF).

Cross-posted from Next Big Future by Brian Wang, Lifeboat Foundation Director of Research

I am presenting disruption events for humans, and also for biospheres and planets, and correlating them, where I can, with historical frequency and scale.

There has been previous work on categorizing and classifying extinction events. There is Bostrom’s paper, and there is also the work by Jamais Cascio and Michael Anissimov on classification and identifying risks (presented below).

A recent article discusses the inevitable “end of societies” (it refers to civilizations, but seems to mean something more like the end of the Roman Empire, which still left Italy, Austria-Hungary, etc. to emerge later).

The theories around complexity seem to me to suggest that core developments along connected S-curves of technology and societal processes (around key areas of energy, transportation, governing efficiency, agriculture, and production) cap out, and then a society falls back (a soft or hard dark age), reconstitutes, and starts back up again.

Here is a wider range of disruptions, which can also be correlated with the frequency at which they have occurred historically.

High growth dropping to low growth (short business cycles, every few years)
Recession (soft or deep), every five to fifteen years
Depression (every 50–100 years, can be more frequent)

List of recessions for the USA (includes depressions)

Differences between a recession and a depression:

A good rule of thumb for determining the difference between a recession and a depression is to look at the change in real GDP. A depression is any economic downturn where real GDP declines by more than 10 percent; a recession is an economic downturn that is less severe. By this yardstick, the last depression in the United States was from May 1937 to June 1938, when real GDP declined by 18.2 percent. The Great Depression of the 1930s can be seen as two separate events: an incredibly severe depression lasting from August 1929 to March 1933, during which real GDP declined by almost 33 percent; a period of recovery; and then the less severe depression of 1937–38. (Depressions occur every 50–100 years, and were more frequent in the past.)
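The rule of thumb above reduces to a one-line test. The 10 percent threshold and the 18.2 percent figure for 1937–38 come from the paragraph; the milder figure below is a hypothetical example of an ordinary recession:

```python
# The recession/depression rule of thumb as a tiny function:
# a downturn with a real-GDP decline of more than 10 percent
# counts as a depression; anything less severe is a recession.

def classify_downturn(gdp_decline_pct):
    """Classify by the peak-to-trough decline in real GDP, in percent."""
    return "depression" if gdp_decline_pct > 10 else "recession"

print(classify_downturn(18.2))  # the 1937-38 episode -> depression
print(classify_downturn(2.6))   # a hypothetical mild downturn -> recession
```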

Dark age (a period of societal collapse, soft/light or regular)
I would say the difference between a long recession and a dark age has to do with a breakdown of societal order, some level of population decline or die-back, and a loss of knowledge or breakdown of education. (Once per thousand years.)

I would say that a soft dark age is also something like what China experienced from the 1400s to 1970: basically a series of really bad societal choices. It falls somewhere between a depression and a dark age, or does not categorize as neatly, but it amounts to underperformance by a factor of twenty versus competing groups. Perhaps there should be some categorization of societal disorder: levels and categories of major society-wide failures, historic-level mistakes. The Chinese experience, I think, was triggered by the renunciation of the ocean-going fleet and of outside ideas and technology, along with a lot of follow-on mistakes.

Plagues played a part in weakening the Roman and Han empires.

A talk on societal collapse, which includes Toynbee’s analysis.

Toynbee argues that the breakdown of civilizations is not caused by loss of control over the environment, over the human environment, or attacks from outside. Rather, it comes from the deterioration of the “Creative Minority,” which eventually ceases to be creative and degenerates into merely a “Dominant Minority” (who forces the majority to obey without meriting obedience). He argues that creative minorities deteriorate due to a worship of their “former self,” by which they become prideful, and fail to adequately address the next challenge they face.

My take is that the Enlightenment would be strengthened by a larger creative majority, in which everyone has a stake in, and the capability to, creatively advance society. I have an article about who the elite are now.

Many now argue that the Dark Ages were not as bad as commonly believed.
The Dark Ages are also referred to as part of the Middle Ages.

Population during the middle ages

Between dark age/social collapse and extinction, there are levels of decimation/devastation (using orders of magnitude: 90+%, 99%, 99.9%, 99.99%, …).

Level 1 decimation = 90% population loss
Level 2 decimation = 99% population loss
Level 3 decimation = 99.9% population loss

Level 9 population loss would pretty much be extinction for current human civilization: only six to seven people left, or fewer, which would not be a viable population.
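Assuming (hypothetically) the roughly 6.7 billion people alive in 2008, the survivor count at each decimation level is a one-line calculation, and it confirms the Level 9 figure above:

```python
# Survivors at each decimation level, assuming a hypothetical 2008
# world population of about 6.7 billion. Level N means a surviving
# fraction of 10^-N (Level 1 = 90% loss, Level 2 = 99% loss, ...).

population = 6_700_000_000

for level in range(1, 10):
    surviving_fraction = 10 ** -level
    loss_pct = (1 - surviving_fraction) * 100
    survivors = population * surviving_fraction
    print(f"Level {level}: {loss_pct:.7f}% loss, ~{survivors:,.0f} survivors")
# Level 1 leaves ~670,000,000 people; Level 9 leaves ~7.
```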

Decimations can be regional or global, and can apply to some number of species.

Categorizations of extinctions and end-of-world categories

Extinctions can be regional or global, and can apply to some number of species.

Mass extinction events have occurred in the past to other species: the dinosaurs, and many others. (For each species there can be only one extinction event.)

Unfortunately, Michael’s Accelerating Future blog is having some issues, so here is a cached link.

Michael was identifying man-made risks.
The Easier-to-Explain Existential Risks (remember, an existential risk is something that can set humanity way back, not necessarily killing everyone):

1. neoviruses
2. neobacteria
3. cybernetic biota
4. Drexlerian nanoweapons

The hardest to explain is probably #4. My proposal here is that, if someone has never heard of the concept of existential risk, it’s easier to focus on these first four before even daring to mention the latter ones. But here they are anyway:

5. runaway self-replicating machines (“grey goo” is not recommended because it is too narrow a term)
6. destructive takeoff initiated by intelligence-amplified human
7. destructive takeoff initiated by mind upload
8. destructive takeoff initiated by artificial intelligence

Another classification scheme is the eschatological taxonomy by Jamais Cascio on Open the Future. His scheme has seven categories, one with two sub-categories. These are:

0: Regional Catastrophe (examples: moderate-case global warming, minor asteroid impact, local thermonuclear war)
1: Human Die-Back (examples: extreme-case global warming, moderate asteroid impact, global thermonuclear war)
2: Civilization Extinction (examples: worst-case global warming, significant asteroid impact, early-era molecular nanotech warfare)
3a: Human Extinction-Engineered (examples: targeted nano-plague, engineered sterility absent radical life extension)
3b: Human Extinction-Natural (examples: major asteroid impact, methane clathrates melt)
4: Biosphere Extinction (examples: massive asteroid impact, “iceball Earth” reemergence, late-era molecular nanotech warfare)
5: Planetary Extinction (examples: dwarf-planet-scale asteroid impact, nearby gamma-ray burst)
X: Planetary Elimination (example: post-Singularity beings disassemble planet to make computronium)

A couple of interesting posts about historical threats to civilization and life by Howard Bloom.

Natural climate shifts and from space (not asteroids but interstellar gases).

Humans are not the most successful life; bacteria are. Bacteria have survived for 3.85 billion years; humans for 100,000 years, and no other kind of life has lasted more than 160 million years. [Other species have only managed to hang in there for anywhere from 1.6 million to 160 million years. We humans are one of the shortest-lived natural experiments around. We’ve been here in one form or another for a paltry two and a half million years.] If your numbers are not big enough and you are not diverse enough, then something in nature eventually wipes you out.

Following the bacteria survival model could mean using transhumanism as a survival strategy: creating more diversity to allow for better survival. That would mean humans adapted to living under the sea, deep in the earth, and in various niches in space; greater radiation resistance; non-biological forms; and so on. It would also mean spreading into space (panspermia). Individually, using technology, we could become very successful at life extension, but a good plan for the long-term survival of the human civilization, society, and species will take more than that.

Other periodic challenges:
142 mass extinctions, 80 glaciations in the last two million years, a planet that may have once been a frozen iceball, and a klatch of global warmings in which the temperature has soared by 18 degrees in ten years or less.

In the last 120,000 years there were 20 interludes in which the temperature of the planet shot up 10 to 18 degrees within a decade. Until just 10,000 years ago, the Gulf Stream shifted its route every 1,500 years or so. Such shifts would melt mega-islands of ice, put our coastal cities beneath the surface of the sea, and strip our farmlands of the conditions they need to produce the food that feeds us.

The solar system has a 240-million-year orbit around the center of our galaxy, an orbit that takes us through interstellar gas clusters called “local fluff”: clusters that strip our planet of its protective heliosphere, bombard the Earth with cosmic radiation, and trigger giant climate changes.

If civilisation is wiped out on Earth, salvation may come from space. Plans are being drawn up for a “Doomsday ark” on the Moon containing the essentials of life and civilisation, to be activated in the event of Earth being devastated by a giant asteroid or nuclear war.

Construction of a lunar information bank, discussed at a conference in Strasbourg last month, would provide survivors on Earth with a remote-access toolkit to rebuild the human race.

A basic version of the ark would contain hard discs holding information such as DNA sequences and instructions for metal smelting or planting crops. It would be buried in a vault just under the lunar surface and transmitters would send the data to heavily protected receivers on earth. If no receivers survived, the ark would continue transmitting the information until new ones could be built.

The vault could later be extended to include natural material including microbes, animal embryos and plant seeds and even cultural relics such as surplus items from museum stores.

As a first step to discovering whether living organisms could survive, European Space Agency scientists are hoping to experiment with growing tulips on the moon within the next decade.

According to Bernard Foing, chief scientist at the agency’s research department, the first flowers — tulips or arabidopsis, a plant widely used in research — could be grown in 2012 or 2015.

“Eventually, it will be necessary to have a kind of Noah’s ark there, a diversity of species from the biosphere,” said Foing.

Tulips are ideal because they can be frozen, transported long distances and grown with little nourishment. Combined with algae, an enclosed artificial atmosphere and chemically enhanced lunar soil, they could form the basis of an ecosystem.

Read the entire article at Times Online. See also “‘Lunar Ark’ Proposed in Case of Deadly Impact on Earth” on National Geographic.

The Economist has a piece on the Global Viral Forecasting Initiative (GVFI):

Dr [Nathan] Wolfe, [a virologist at the University of California, Los Angeles], is attempting to create what he calls the Global Viral Forecasting Initiative (GVFI). This is still a pilot project, with only half a dozen sites in Africa and Asia. But he hopes, if he can raise the $50m he needs, to build it into a planet-wide network that can forecast epidemics before they happen, and thus let people prepare their defences well in advance. […]

The next stage of the project is to try to gather as complete an inventory as possible of animal viruses, and Dr Wolfe has enlisted his hunters to take blood samples from whatever they catch. He is collaborating with Eric Delwart and Joe DeRisi of the University of California, San Francisco, to screen this blood for unknown viral genes that indicate new species. The GVFI will also look at people, monitoring symptoms of ill health of unknown cause and trying to match these with unusual viruses.

More here. See also the Lifeboat Foundation’s BioShield program.

The New York Times reports that Jeffrey Martin and William L. Kubic Jr., two scientists from Los Alamos National Laboratory, are proposing a process by which carbon dioxide — the primary greenhouse gas considered responsible for contributing to global warming — emitted by cars and other polluters would be captured and converted into gasoline, methane, and jet fuel.

The bold proposal, which the duo have named “Green Freedom,” would create a closed cycle out of the emission of greenhouse gases, resulting in a vast source of renewable energy, where today we have an open-ended cycle that is considered a global threat.

They say the science is relatively simple and straightforward. Polluted air would be blown over potassium carbonate, which would sequester the CO2; a chemical process would then remove the trapped CO2, and via a number of established chemical processes it would be converted into various types of fuel.
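The article does not spell out the chemistry beyond naming potassium carbonate as the sorbent, but assuming the standard carbonate-to-bicarbonate capture reaction (K2CO3 + CO2 + H2O → 2 KHCO3), a back-of-envelope sketch of the sorbent mass balance looks like this:

```python
# Back-of-envelope stoichiometry for the capture step. The reaction
# below is our assumption (the article only names potassium carbonate
# as the sorbent):
#     K2CO3 + CO2 + H2O -> 2 KHCO3
# One mole of K2CO3 absorbs one mole of CO2, so the mass ratio is
# just the ratio of molar masses (in g/mol).

M_K2CO3 = 138.2  # molar mass of potassium carbonate
M_CO2 = 44.0     # molar mass of carbon dioxide

def sorbent_needed_kg(co2_kg):
    """Kilograms of K2CO3 cycled per kilogram of CO2 captured (1:1 molar)."""
    return co2_kg * M_K2CO3 / M_CO2

print(f"{sorbent_needed_kg(1.0):.2f} kg K2CO3 per kg CO2")  # ~3.14
```

The sorbent is regenerated when the CO2 is stripped back out, so this is material cycled per kilogram of CO2, not consumed.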

Although the process has not been demonstrated and no prototypes have been built, the pair claim that the required steps, or chemical processes they describe as close cousins of those required, are already in use. In addition, none of the processes violates any known laws of physics, and a number of other top researchers have independently made similar suggestions for the sequestration and reuse of emitted CO2.

This concept is not without its share of controversy and detractors, however, with claims ranging from the argument that the economics of the process make it unfeasible, to concerns that it will encourage further overpopulation and sprawl, not to mention the worries that accompany the proliferation of nuclear power. It is nevertheless an interesting concept, and it proves, if nothing else, that through continued investment in breakthrough technologies we can confront challenges both environmental and societal.