
Determining the structure of a protein called hemagglutinin on the surface of influenza B is giving researchers at Baylor College of Medicine and Rice University in Houston clues as to what kinds of mutations could spark the next flu pandemic.

This is interesting research, and progress toward understanding and possibly blocking the changes that could lead to pandemics.

In a report that goes online today in the Proceedings of the National Academy of Sciences (PNAS), Drs. Qinghua Wang, assistant professor of biochemistry and molecular biology at BCM, and Jianpeng Ma, associate professor in the same department, and their colleagues describe the actual structure of influenza B virus hemagglutinin and compare it to a similar protein on influenza A virus. That comparison may be key to understanding the changes that will have to occur before avian flu (which is a form of influenza A virus) mutates to a form that can easily infect humans, said Ma, who holds a joint appointment at Rice. He and Wang have identified a particular residue, or portion of the protein, that may play a role in how different types of hemagglutinin bind to human cells.

“What would it take for the bird flu to mutate and start killing people? That’s the next part of our work,” said Ma. Understanding that change may give scientists a handle on how to stymie it.

There are two main forms of influenza virus – A and B. Influenza B virus infects only people while influenza A infects people and birds. In the past, influenza A has been the source of major worldwide epidemics (called pandemics) of flu that have swept the globe, killing millions of people. The most famous of these was the Pandemic of 1918–1919, which is believed to have killed between 20 and 40 million people worldwide. It killed more people than World War I, which directly preceded it.

The Asian flu pandemic of 1957–1958 is believed to have killed as many as 1.5 million people worldwide, and the so-called Hong Kong flu pandemic of 1968–1969 is credited with as many as 1 million deaths. Each scourge was accompanied by a major change in the proteins on the surface of the virus.

The Lifeboat Foundation has the bioshield project.

New Scientist reports on a new study in which researchers led by Massimiliano Vasile of the University of Glasgow in Scotland compared nine of the many methods proposed to ward off threatening asteroids, including blasting them with nuclear explosions.

The team assessed the methods according to three performance criteria: the amount of change each method would make to the asteroid’s orbit, the amount of warning time needed and the mass of the spacecraft needed for the mission.

The method that came out on top was a swarm of mirror-carrying spacecraft. The spacecraft would be launched from Earth to hover near the asteroid and concentrate sunlight onto a point on the asteroid’s surface.

In this way, they would heat the asteroid’s surface to more than 2,100 °C, enough to start vaporising it. As the gases spewed from the asteroid, they would create a small thrust in the opposite direction, altering the asteroid’s orbit.

The scientists found that 10 of these spacecraft, each bearing a 20-metre-wide inflatable mirror, could deflect a 150-metre asteroid in about six months. With 100 spacecraft, it would take just a few days, once the spacecraft are in position.

To deflect a 20-kilometre asteroid, about the size of the one that wiped out the dinosaurs, it would take the combined work of 5000 mirror spacecraft focusing sunlight on the asteroid for three or more years.
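
A rough back-of-envelope calculation, not taken from the Glasgow study, helps explain why warning time dominates these comparisons. The sketch below assumes the crudest possible model, in which the miss distance simply grows as delta-v multiplied by warning time; real deflection campaigns need full orbital mechanics, so treat the numbers as illustrative only.

```python
# Back-of-envelope estimate: the velocity change (delta-v) an asteroid needs
# so that, by the time it reaches Earth, it has drifted off course by about
# one Earth radius.  Assumes the crude model that miss distance grows
# linearly as delta_v * warning_time; treat the numbers as illustrative only.

EARTH_RADIUS_M = 6.4e6        # target miss distance: roughly one Earth radius
SECONDS_PER_YEAR = 3.15e7

def required_delta_v(warning_time_years: float) -> float:
    """Delta-v (m/s) needed to accumulate one Earth radius of miss distance."""
    return EARTH_RADIUS_M / (warning_time_years * SECONDS_PER_YEAR)

for years in (0.5, 2, 10, 30):
    dv_cm_s = required_delta_v(years) * 100
    print(f"{years:5.1f} years of warning -> ~{dv_cm_s:.1f} cm/s of delta-v")
```

With decades of notice, a push of a few centimetres per second is enough, which is what makes slow, gentle methods such as mirror ablation or gravity tractors viable at all; with only months of warning, the required delta-v climbs sharply.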

But Clark Chapman of the Southwest Research Institute in Boulder, Colorado, US, says ranking the options based on what gives the largest nudge and takes the least time is wrongheaded.

The proper way to go about ranking this “is to give weight to adequate means to divert an NEO of the most likely sizes we expect to encounter, and to do so in a controllable and safe manner”, Chapman told New Scientist.

The best approach may be to ram the asteroid with a spacecraft to provide most of the change needed, then follow up with a gravity tractor to make any small adjustments needed, he says.

It is good to have several options for deflection, along with a survey to detect the specific risks posed by near-Earth objects.

When I read about the “Aurora Generator Test” video that was leaked to the media, I wondered why it was leaked now and who benefits. Like many of you, I question the reasons behind any leak from an “unnamed source” inside the US Federal government to the media. Hopefully we’ll all benefit from this particular leak.

Then I thought back to a conversation I had at a trade show booth I was working in several years ago. I was speaking with a fellow from the power generation industry. He indicated that he was very worried about the security ramifications of a hardware refresh of the SCADA systems that his utility was using to control its power generation equipment. The legacy UNIX-based SCADA systems were going to be replaced by Windows-based systems. He was even more worried that the “air gaps” that have historically been used to physically separate the SCADA control networks from the power company’s regular data networks might be removed to cut costs.

Thankfully, on July 19, 2007, the Federal Energy Regulatory Commission proposed to the North American Electric Reliability Corporation a set of new, and much overdue, cyber security standards that will, once adopted and enforced, do a lot to make an attacker’s job harder. Thank God, the people who operate the most critically important part of our national infrastructure have noticed the obvious.

Hopefully a little sunlight will help accelerate the process of reducing the attack surface of North America’s power grid.

After all, the march to the Singularity will go a lot slower without a reliable power grid.

Matt McGuirl, CISSP

A new biosensor developed at the Georgia Tech Research Institute (GTRI) can detect avian influenza in just minutes. In addition to being a rapid test, the biosensor is economical, field-deployable, sensitive to different viral strains and requires no labels or reagents.

This kind of technology could be applied to real-time monitoring of other diseases as well.


Photograph of the optical biosensor, which is approximately 16 millimeters by 33 millimeters in size. The horizontal purple lines are the channels on the waveguide. Credit: Gary Meek

“We can do real-time monitoring of avian influenza infections on the farm, in live-bird markets or in poultry processing facilities,” said Jie Xu, a research scientist in GTRI’s Electro-Optical Systems Laboratory (EOSL).

The biosensor is coated with antibodies specifically designed to capture a protein located on the surface of the viral particle. For this study, the researchers evaluated the sensitivity of three unique antibodies to detect avian influenza virus.

The sensor utilizes the interference of light waves, a concept called interferometry, to precisely determine how many virus particles attach to the sensor’s surface. More specifically, light from a laser diode is coupled into an optical waveguide through a grating and travels under one sensing channel and one reference channel.

Researchers coat the sensing channel with the specific antibodies and coat the reference channel with non-specific antibodies. Having the reference channel minimizes the impact of non-specific interactions, as well as changes in temperature, pH and mechanical motion. Non-specific binding should occur equally to both the test and reference channels and thus not affect the test results.

An electromagnetic field associated with the light beams extends above the waveguides and is very sensitive to the changes caused by antibody-antigen interactions on the waveguide surface. When a liquid sample passes over the waveguides, any binding that occurs on the top of a waveguide because of viral particle attachment causes water molecules to be displaced. This causes a change in the velocity of the light traveling through the waveguide.
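
A minimal numerical sketch of that measurement principle follows. It is not a model of GTRI’s actual device; the wavelength, interaction length and index changes are placeholder assumptions chosen only to show how a differential phase between the sensing and reference channels turns into an interference signal.

```python
import math

# Toy model of a two-channel waveguide interferometer.
# Binding of viral particles raises the effective refractive index seen by the
# sensing channel; the reference channel (non-specific antibodies) sees only
# the common background, so the differential phase isolates specific binding.
# All numerical values are illustrative placeholders, not device parameters.

WAVELENGTH_M = 850e-9        # assumed laser-diode wavelength
INTERACTION_LENGTH_M = 0.01  # assumed sensing-window length (1 cm)

def phase_shift(delta_n_eff: float) -> float:
    """Phase shift (radians) from a change delta_n_eff in effective index."""
    return 2 * math.pi * INTERACTION_LENGTH_M * delta_n_eff / WAVELENGTH_M

def interference_signal(phase_sensing: float, phase_reference: float) -> float:
    """Normalized intensity of the recombined beams (0..1)."""
    return 0.5 * (1 + math.cos(phase_sensing - phase_reference))

# Common-mode drift (temperature, pH, mechanical motion, non-specific binding)
# affects both channels equally and cancels in the differential phase.
common_drift = phase_shift(5e-6)
specific_binding = phase_shift(2e-6)   # extra index change from captured virions

signal = interference_signal(common_drift + specific_binding, common_drift)
print(f"differential phase: {specific_binding:.3f} rad, signal: {signal:.3f}")
```

Counting captured virions would then come down to calibrating how much effective-index change a single bound particle produces, which is exactly where the antibody specificity and the reference channel’s common-mode rejection matter.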

There are dozens of published existential risks; there are undoubtedly many more that Nick Bostrom did not think of in his paper on the subject. Ideally, the Lifeboat Foundation and other organizations would identify each of these risks and take action to combat them all, but this simply isn’t realistic. We have a finite budget and a finite number of man-hours to spend on the problem, and our resources aren’t even particularly large compared with other non-profit organizations. If Lifeboat or other organizations are going to take serious action against existential risk, we need to identify the areas where we can do the most good, even at the expense of ignoring other risks. Humans like to totally eliminate risks, but this is a cognitive bias; it does not correspond to the most effective strategy. In general, when assessing existential risks, there are a number of useful heuristics:

- Any risk which has become widely known, or an issue in contemporary politics, will probably be very hard to deal with. Thus, even if it is a legitimate risk, it may be worth putting on the back burner; there’s no point in spending millions of dollars for little gain.

- Any risk which is totally natural (could happen without human intervention) must be highly improbable, as we know we have been on this planet for a hundred thousand years without getting killed off. To estimate the probability of these risks, use Laplace’s Law of Succession (see the sketch after this list).

- Risks which we cannot affect the probability of can be safely ignored. It does us little good to know that there is a 1% chance of doom next Thursday, if we can’t do anything about it.
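
To make the Laplace heuristic mentioned above concrete, here is a minimal sketch. The hundred-thousand-year survival window is the round figure from the text, and treating each year as an independent trial is a simplification: the rule of succession estimates the probability of an event on the next trial as (s + 1) / (n + 2), given s occurrences in n trials so far.

```python
def laplace_succession(occurrences: int, trials: int) -> float:
    """Laplace's rule of succession: estimated probability of the event
    happening on the next trial, given `occurrences` in `trials` so far."""
    return (occurrences + 1) / (trials + 2)

# Illustrative use: a purely natural extinction-level event that has never
# occurred in roughly 100,000 years of human existence.  Treating each year
# as one trial gives a rough estimate for the coming year.
years_survived = 100_000   # assumed round figure from the text
p_next_year = laplace_succession(occurrences=0, trials=years_survived)
p_next_century = 1 - (1 - p_next_year) ** 100

print(f"Per-year estimate:    {p_next_year:.2e}")    # ~1e-5
print(f"Per-century estimate: {p_next_century:.2e}")  # ~1e-3
```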

Some specific risks which can be safely ignored:

- Particle accelerator accidents. We don’t yet know enough high-energy physics to say conclusively that a particle accelerator could never create a true vacuum, stable strangelet, or another universe-destroying particle. Luckily, we don’t have to; cosmic rays have been bombarding us for the past four billion years, with energies a million times higher than anything we can create in an accelerator. If it were possible to annihilate the planet with a high-energy particle collision, it would have happened already.

- The simulation gets shut down. The idea that “the universe is a simulation” is equally good at explaining every outcome: no matter what happens in the universe, you can concoct some reason why the simulators would engineer it. Which specific actions would make the universe safer from being shut down? We have no clue, and barring a revelation from On High, we have no way to find out. If we do try to take action to stop the universe from being shut down, it could just as easily make the risk worse.

- A long list of natural scenarios. To quote Nick Bostrom: “solar flares, supernovae, black hole explosions or mergers, gamma-ray bursts, galactic center outbursts, supervolcanos, loss of biodiversity, buildup of air pollution, gradual loss of human fertility, and various religious doomsday scenarios.” We can’t prevent most of these anyway, even if they were serious risks.

Some specific risks which should be given lower priority:

- Asteroid impact. This is a serious risk, but it still has a fairly low probability, on the order of one in 10^5 to 10^7 for something that would threaten the human species within the next century or so. Mitigation is also likely to be quite expensive compared to other risks.

- Global climate change. While this is fairly probable, the impact of it isn’t likely to be severe enough to qualify as an existential risk. The IPCC Fourth Assessment Report has concluded that it is “very likely” that there will be more heat waves and heavy rainfall events, while it is “likely” that there will be more droughts, hurricanes, and extreme high tides; these do not qualify as existential risks, or even anything particularly serious. We know from past temperature data that the Earth can warm by 6–9 °C on a fairly short timescale, without causing a permanent collapse or even a mass extinction. Additionally, climate change has become a political problem, making it next to impossible to implement serious measures without a massive effort.

- Nuclear war is a special case, because although we can’t do much to prevent it, we can take action to prepare for it in case it does happen. We don’t even have to think about the best ways to prepare; there are already published, reviewed books detailing what can be done to seek safety in the event of a nuclear catastrophe. I firmly believe that every transhumanist organization should have a contingency plan in the event of nuclear war, economic depression, a conventional WWIII or another political disaster. This planet is too important to let it get blown up because the people saving it were “collateral damage”.

- Terrorism. It may be the bogeyman-of-the-decade, but terrorists are not going to deliberately destroy the Earth; terrorism is a political tool with political goals that require someone to be alive. While terrorists might do something stupid which results in an existential risk, “terrorism” isn’t a special case that we need to separately plan for; a virus, nanoreplicator or UFAI is just as deadly regardless of where it comes from.

Increasingly, tools readily available on the Internet enable independent specialists or even members of the general public to do intelligence work that used to be the monopoly of agencies like the CIA, KGB, or MI6. Playing the role of an armchair James Bond, Hans K. Kristensen, a nuclear weapons specialist at the Federation of American Scientists (FAS) in Washington, D.C., recently drew attention to images on Google Earth of Chinese sites. Kristensen believes that the pictures shed light on China’s deployment of its second generation of nuclear weapons systems: one appears to be a new ballistic missile submarine; others may capture the replacement of liquid-fueled rockets with solid-fuel rockets at sites in north-central China, within range of ICBM fields in southern Russia.

Source: IEEE Spectrum. An excellent example of how open source intelligence can outsmart military intelligence.

See also: Nuclear terrorism: the new day after from the Bulletin of Atomic Scientists. From the article:

Finally, there is the question of whether the U.S. government would behave with rational restraint. This, of course, assumes that there is a government. A terrorist nuclear attack on Washington could easily kill the president, vice president, much of Congress and the Supreme Court. But in a July 12 Washington Post op-ed, Norman Ornstein revealed that the federal government has refused to make contingency plans for its own nuclear decapitation, which means that U.S. nuclear weapons could be in the hands of small, enraged launch control teams with no clear line of authority above them. Assuming that the federal government was still there, however, we can only imagine (using the reaction to the loss of a mere two buildings on 9/11 as a metric of comparison) the public rage at the loss of a city and the intense, perhaps irresistible, pressure on the president to make someone, somewhere pay for this atrocity.

The US-led effort to expand the military BMEWS (ballistic missile early warning radar system) to Poland and the Czech Republic provokes Russian military strategists. Putin has proposed using Russia’s already operational radar base in Azerbaijan (See “Azeri radar eyed for US shield”, BBC) in exchange for information from the US system. The proposed US/NATO TMD (theater missile defense) will also integrate early warning systems for short-range missiles in southern Europe. Is the race for space awareness and the weaponization of space inevitable?

The justification for the missile shield is the potential threat of long-range missiles from Iran and North Korea (See “N-Korea test fires missile”, BBC). Military experts predict that, given the current progress of Iran’s nuclear research and missile technology, the country could pose a threat to the US by 2015. NATO and Russia co-operate on certain military matters through the Russia-NATO Council but have increasingly been in conflict over the Iranian nuclear program and the European missile shield (See “Russia-NATO: A marriage of convenience”, RIA Novosti). Russia has also demonstrated the ineffectiveness of the missile shield by launching its RS-24 missile system carrying 10 warheads (See “RS-24 Missiles to replace old systems within next few years”, Interfax).

Terrestrial radars need to be complemented by satellites to keep track of missile launches across the planet and enable so-called “boost phase interceptors” (see “Missile defense, satellites and politics”, The Space Review), ensuring complete space awareness. The Chinese Space Agency tested an anti-satellite missile earlier this year (See “Pentagon says China’s anti-satellite test posed a threat to nations”, AP); the official press release was the only information provided by Chinese authorities. The move towards a hot space war could be imminent. The secrecy surrounding space capabilities was recently challenged by French authorities when they discovered 20–30 unregistered US surveillance satellites (See “French says ‘non’ to U.S. Disclosure of Secret Satellites”, Space.com).

The race for the control of space is threatening to destabilize established military power structures. Secrecy is not the way to solve imbalances in international relations. Space is a part of the “commons” and should be dealt with accordingly. I propose an open source approach to the space awareness problematique. There are several approaches to distributed space awareness, e.g. launching private satellites for surveillance and distributing real-time satellite imagery in order to counter a military space race. The alternative is a UN-led control organization like the IAEA.

Other organizations like the Lifeboat Foundation could also play an important role in developing a threat reduction system for the ongoing cold space war.

Five evolutionary stages of pathogen progression from animal to human transmission have been identified. A monitoring system for viral chatter has been proposed to provide warning of new diseases before they spread widely among humans.

In 1999, Wolfe began field work in the jungles of Cameroon to track “viral chatter,” or the regular transmission of diseases from animals to people, usually without further spread among humans. By monitoring the habits and the blood pathologies of bushmeat hunters and their kills, Wolfe and his team have identified at least three previously unknown retroviruses from the same family as HIV, as well as promoted safe practices for handling animals and animal carcasses.

“The Cameroon project demonstrated that it’s possible to collect information on viral transmission under very difficult circumstances from these highly exposed people,” Wolfe said.

With Cameroon as a prototype and a $2.5 million National Institutes of Health Pioneer Award as seed money, Wolfe has gone on to create a network of virus-discovery projects that monitor hunters, butchers, and wildlife trade and zoo workers in some of the world’s most remote viral hotspots. The network of a dozen sites in China, the Democratic Republic of Congo, Malaysia, Laos, Madagascar and Paraguay includes source locations for such emerging diseases as SARS, avian flu, Nipah, Ebola and monkeypox.

The report gives more details on the five stages and proposes a study of the detailed origins of disease.

Wolfe and his colleagues begin by identifying five intermediate stages through which a pathogen exclusively infecting animals must travel before exclusively infecting humans. The research team identifies no inevitable progression of microbes from Stage 1 to Stage 5 and notes that many microbes remain stuck at a specific stage. The journey is arduous, and pathogens rarely climb through all five stages:

Stage 1. Agent only in animals: A microbe that is present in animals but not detected in humans under natural conditions. Examples include most malarial plasmodia.

Stage 2. Primary infection: Animal pathogens that are transmitted from animals to humans as a primary infection but not transmitted among humans. Examples include anthrax, rabies and West Nile virus.

Stage 3. Limited outbreak: Animal pathogens that undergo only a few cycles of secondary transmission among humans so that occasional human outbreaks triggered by a primary infection soon die out. Examples include the Ebola, Marburg and monkeypox viruses.

Stage 4. Long outbreak: A disease that exists in animals and has a natural cycle of infecting humans by primary transmission from the animal host but that also undergoes long sequences of secondary transmission between humans without involvement of animals. Examples include Chagas disease, yellow fever, dengue fever, influenza A, cholera, typhus and West African sleeping sickness.

Stage 5. Exclusive human agent: A pathogen exclusive to humans that involves either an ancestral pathogen present in a common ancestor of chimps and humans or involves a more recent pathogen that evolved into a specialized human pathogen. Examples include HIV, measles, mumps, rubella, smallpox and syphilis.

In addition, the team examines 25 diseases of important historic consequence to humans. Of the 25 diseases, 17 impose the heaviest world burden today: hepatitis B, influenza A, measles, pertussis, rotavirus A, syphilis, tetanus, tuberculosis, AIDS, Chagas disease, cholera, dengue hemorrhagic fever, East and West African sleeping sicknesses, falciparum and vivax malarias, and visceral leishmaniasis.

Eight more imposed heavy burdens in the past but have been reined in or eradicated thanks to modern medicine and public health practices: temperate diphtheria, mumps, plague, rubella, smallpox, typhoid, typhus and tropical yellow fever. Except for AIDS, dengue fever and cholera, most of the 25 have been important for more than two centuries.

The research team considered the varied pathologies of diseases originating in temperate (15) versus tropical (10) regions, as well as differing pathogen and geographic origins. Among the conclusions:

– Most of the temperate diseases, but none of the tropical diseases, are so-called “crowd epidemic diseases,” occurring locally as a brief epidemic and capable of persisting regionally only in large human populations. Most of the diseases originating in temperate climates convey long-lasting immunity.

– Eight of the 15 temperate diseases probably or possibly reached humans from domestic animals, three more from apes or rodents, and the other four came from still unknown sources. Thus the rise of agriculture, starting 11,000 years ago, plays multiple roles in the evolution of animal pathogens into human pathogens.

– Most tropical diseases have originated in wild, non-human primates. These animals are most closely related to humans and thus pose the weakest species barriers to pathogen transfer.

– Animal-derived human pathogens virtually all arose from pathogens of other warm-blooded vertebrates plus, in two cases, birds.

– Nearly all of the 25 major human pathogens originated in the Old World (Africa, Europe and Asia), facilitating the conquest of the New World. Chagas disease is the only one of the 25 that clearly originated in the New World, while the debate is unresolved for syphilis and tuberculosis.

– Far more temperate diseases arose in the Old World because far more animals that furnish ancestral pathogens were domesticated there. Far fewer tropical diseases arose in the New World because the genetic distance is greater between humans and primates in this part of the globe.

The conclusions of the review illustrate large gaps in the understanding of the origins of even established major infectious diseases. Almost all studies reviewed were based on specimens collected from domestic animals, plus a few wild animal species.

The researchers propose an “origins initiative” aimed at identifying the origins of a dozen of the most important human infectious diseases as well as a global early warning system to monitor pathogens emerging from animals to humans.

This work is relevant to the Lifeboat Foundation’s bioshield project.

In a report to be published in the peer-reviewed journal PLoS Computational Biology and currently available online, Sally Blower, a professor at the Semel Institute for Neuroscience and Human Behavior at UCLA, and Romulus Breban and Raffaele Vardavas, postdoctoral fellows in Blower’s research group, used novel mathematical modeling techniques to predict that current health policy — based on voluntary vaccinations — is not adequate to control severe flu epidemics and pandemics unless vaccination programs offer incentives to individuals.

According to the researchers, the severity of such a health crisis could be reduced if programs were to provide several years of free vaccinations to individuals who pay for only one year. Interestingly, however, some incentive programs could have the opposite effect. Providing free vaccinations for entire families, for example, could actually increase the frequency of severe epidemics. This is because when the head of the household makes a choice — flu shots or no flu shots — on behalf of all the other household members, there is no individual decision-making, and adaptability is decreased.

While other models have determined what proportion of the population would need to be vaccinated in order to prevent a pandemic, none of these models have shown whether this critical coverage can actually be reached. What has been missing, according to Blower, a mathematical and evolutionary biologist, is the human factor.

The human factor involves two biological characteristics, “memory and how adaptable people can be,” Blower said. “These characteristics drive human behavior.”

The model Blower’s team developed is inspired by game theory, used in economics to predict how non-communicating, selfish individuals reach a collective behavior with respect to a common dilemma by adapting to what they think are other people’s decisions. The group modeled each individual’s strategy for making yearly vaccination decisions as an adaptive process of trial and error. They tracked both individual-level decisions and population-level variables — that is, the yearly vaccine coverage level and influenza prevalence, where prevalence is defined as the proportion of the population that is infected. The individual-level model was based on the human biological attributes of memory and adaptability.
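
To convey the flavor of such a model, here is a deliberately simplified agent-based sketch. It is not the UCLA group’s published model; the epidemic function, payoff rules and learning rate are assumptions chosen only for illustration, but the ingredients match those described above: memory of past seasons, adaptive trial-and-error decisions, and prevalence that feeds back on coverage.

```python
import random

# Toy agent-based sketch of adaptive yearly vaccination decisions.
# NOT the published model: the epidemic function, payoffs, and learning rate
# below are illustrative assumptions only.

N_AGENTS = 10_000
LEARNING_RATE = 0.05           # how strongly agents adapt (their "adaptability")
CRITICAL_COVERAGE = 0.6        # assumed coverage above which epidemics fizzle

def prevalence(coverage: float) -> float:
    """Assumed seasonal prevalence among the unvaccinated, given coverage."""
    return max(0.0, 0.4 * (1 - coverage / CRITICAL_COVERAGE))

# Each agent starts with some probability of choosing to vaccinate.
p_vaccinate = [random.random() for _ in range(N_AGENTS)]

for season in range(20):
    choices = [random.random() < p for p in p_vaccinate]
    coverage = sum(choices) / N_AGENTS
    risk = prevalence(coverage)

    for i, vaccinated in enumerate(choices):
        if vaccinated:
            # Vaccinating felt wasteful if almost nobody got sick, worthwhile otherwise.
            target = 1.0 if risk > 0.05 else 0.0
        else:
            # Skipping paid off unless the agent got infected this season.
            infected = random.random() < risk
            target = 1.0 if infected else 0.0
        # Trial-and-error update: nudge next year's decision toward what
        # "worked" this year (memory plus adaptability).
        p_vaccinate[i] += LEARNING_RATE * (target - p_vaccinate[i])

    print(f"season {season:2d}: coverage={coverage:.2f} prevalence={risk:.2f}")
```

In runs of this toy model, coverage tends to settle below the assumed critical level because skipping the shot pays off whenever prevalence is low, the same free-rider dynamic that, in the researchers’ far more detailed model, makes purely voluntary vaccination inadequate.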

The Lifeboat Foundation has the bioshield project.

Cities that quickly closed schools and discouraged public gatherings had fewer deaths from the great flu pandemic in 1918 than cities that did not, researchers reported on Monday. Experts agree that a pandemic of some virus, most likely influenza, is almost 100 percent certain. What is not certain is when it will strike and which virus it will be.

In Kansas City, no more than 20 people could attend weddings or funerals. New York mandated staggered shifts at factories. In Seattle, the mayor told people to wear face masks.

No single action worked on its own, the researchers found; it was the combination of measures that saved lives. In cities that acted early, peak death rates were anywhere from 50 percent lower to eight times lower. St. Louis authorities introduced “a broad series of measures designed to promote social distancing” as soon as flu showed up. Philadelphia downplayed the 1918 flu.

Philadelphia ended up with a peak death rate of 257 people per 100,000 population per week. St. Louis had just 31 per 100,000 at the peak.

In a new pandemic, no good vaccine would be available for months, and drugs that treat influenza would be in very short supply.

So experts are looking at what they call non-pharmacologic interventions — ways to prevent infection without drugs. They hope this can buy time while companies make and distribute vaccines and drugs.

Because the virus is spread by small droplets passed within about three feet (1 meter) from person to person, keeping people apart is considered a possible strategy.

The U.S. government flu plan calls for similar measures, including allowing employees to stay home for weeks or even months, telecommuting and closing schools and perhaps large office buildings.

The Lifeboat Foundation has a bioshield project.