
Tracking your health is a growing phenomenon. People have historically measured and recorded their health using simple tools: a pencil, paper, a watch and a scale. But with custom spreadsheets, streaming wifi gadgets, and a new generation of people open to sharing information, this tracking is moving online. Pew Internet reports that 70–80% of Internet users go online for health reasons, and Health 2.0 websites are popping up to meet the demand.

David Shatto, an online health enthusiast, wrote in to CureTogether, a health-tracking website, with a common question: “I’m ‘healthy’ but would be interested in tracking my health online. Not sure what this means, or what a ‘healthy’ person should track. What do you recommend?”

There are probably as many answers to this question as there are people who track themselves. The basic measures that apply to most people are:
- sleep
- weight
- calories
- exercise
People who have an illness or condition will also measure things like pain levels, pain frequency, temperature, blood pressure, day of cycle (for women), and results of blood and other biometric tests. Athletes track heart rate, distance, time, speed, location, reps, and other workout-related measures.
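For anyone who wants to start right away, even a few lines of code appending to a spreadsheet file will do. Here is a minimal sketch of a daily log covering the basic measures listed above (the file name and field list are just an illustration, not any particular site's format):

```python
import csv
from datetime import date

# Minimal daily health log; the fields follow the basic measures
# listed above, and the file name/schema are just an illustration.
FIELDS = ["date", "sleep_hours", "weight_lb", "calories", "exercise_min"]

def log_day(path, sleep_hours, weight_lb, calories, exercise_min):
    """Append one day's measurements to a CSV file."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if f.tell() == 0:  # brand-new file: write the header row first
            writer.writerow(FIELDS)
        writer.writerow([date.today().isoformat(), sleep_hours,
                         weight_lb, calories, exercise_min])

log_day("health_log.csv", 7.5, 152.0, 2100, 30)
```

The same file opens directly in any spreadsheet program, which is all most self-trackers need to spot trends.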

Another answer to this question comes from Karina, who writes on Facebook: “It’s just something I do, and need to do, and it’s part of my life. So, in a nutshell, on most days I write down what I ate and drank, how many steps I walked, when I went to bed and when I woke up, my workouts and my pain/medication/treatments. I also write down various comments about meditative activities and, if it’s extreme, my mood.”

David’s question is being asked by the media too. Thomas Goetz, deputy editor of Wired Magazine, writes about it in his blog The Decision Tree. Jamin Brophy-Warren recently wrote about the phenomenon of personal data collection in the Wall Street Journal, calling it the “New Examined Life”. Writers and visionaries Kevin Kelly and Gary Wolf have started a growing movement called The Quantified Self, which holds monthly meetings about self-tracking activities and devices. And self-experimenters like David Ewing Duncan (aka “Experimental Man”) and Seth Roberts (of the “Shangri-La Diet”) are writing books about their experiences.

In the end, what to track really depends on what each person wants to get out of it:
- Greater self-awareness and a way to stick to New Year’s resolutions?
- Comparing data to other self-trackers to see where you fit on the health curve?
- Contributing health data to research into finding cures for chronic conditions?

Based on answers to these questions, you can come up with your own list of things to track, or take some of the ideas listed above. Whatever the reason, tracking is the new thing to do online and can be a great way to optimize and improve your health.

Alexandra Carmichael is co-founder of CureTogether, a Mountain View, CA startup that launched in 2008 to help people optimize their health by anonymously comparing symptoms, treatments, and health data. Its members track their health online and share their experience with 186 different health conditions. She is also the author of The Collective Well and Ecnalab blogs, and a guest blogger at the Quantified Self.

In the volume “Global Catastrophic Risks” you can find an excellent article by Milan Ćirković, “Observation selection effects and global catastrophic risks”, in which he shows that we cannot use information from past records to estimate the future rate of global catastrophes.
This has a further consequence, which I investigate in my article “Why antropic principle stops to defend us. Observation selection, future rate of natural disasters and fragility of our environment”: we could be at the end of a long period of stability, some catastrophes may be long overdue, and, most importantly, we may underestimate the fragility of our environment, which could be on the verge of bifurcation. Because the origination of intelligent life on Earth is a very rare event, some critical parameters may lie near their bounds of stability, and small anthropogenic influences could start a catastrophic process in this century.

http://www.scribd.com/doc/8729933/Why-antropic-principle-stops-to-defend-us-Observation-selection-and-fragility-of-our-environment–

Why antropic principle stops to defend us
Observation selection, future rate of natural disasters and fragility of our environment.

Alexei Turchin,
Russian Transhumanist movement

An earlier version of this article was published in Russian in «Problems of management of risks and safety», Works of the Institute of System Analysis of the Russian Academy of Sciences, v. 31, 2007, pp. 306–332.

Abstract:

The main idea of this article is not only that observation selection leads to underestimation of the future rate of natural disasters, but also that our environment, again because of observation selection, may be far more fragile to anthropogenic influences (like an overinflated toy balloon), and so we should think much more carefully about global warming and deep-earth drilling.
The main idea of the anthropic principle (AP) is that our Universe has qualities that allow the existence of observers. In particular, this means that global natural disasters that could have prevented the development of intelligent life on Earth have never happened here. This is true only for the past, not for the future. So we cannot use information about the frequency of global natural disasters in the past to extrapolate into the future, except in some special cases where we have additional information, as Ćirković shows in his paper. Therefore, an observer could find that all the parameters important for his or her survival (the Sun, temperature, asteroid risk, etc.) begin, all together, inexplicably and quickly deteriorating; possibly we can already find signs of this process. In a few words: the anthropic principle has stopped ‘defending’ humanity, and we must take responsibility for our own survival. Moreover, since the origination of intelligent life on Earth is a very rare event, some critical parameters may lie near their bounds of stability, and small anthropogenic influences could start a catastrophic process in this century.
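To make the selection effect concrete, here is a toy Monte Carlo sketch. The parameters are arbitrary assumptions of mine, not Turchin's or Ćirković's: observers arise only on worlds whose history happens to be free of sterilizing catastrophes, so the rate they read off their own past record badly underestimates the true rate.

```python
import random

# Toy model of observation selection (survivorship bias).
# Each epoch a catastrophe occurs with probability P_CAT; a
# catastrophe sterilizes the world with probability P_STERILE,
# otherwise it merely leaves a trace in the record. Observers
# exist only on non-sterilized worlds.
random.seed(1)
P_CAT = 0.02       # assumed true per-epoch catastrophe probability
P_STERILE = 0.9    # assumed chance a catastrophe ends the observer lineage
EPOCHS = 100
TRIALS = 100_000

recorded = 0        # catastrophes visible in survivors' past records
survivor_epochs = 0

for _ in range(TRIALS):
    traces = 0
    sterilized = False
    for _ in range(EPOCHS):
        if random.random() < P_CAT:
            if random.random() < P_STERILE:
                sterilized = True
                break
            traces += 1
    if not sterilized:              # only survivors host observers
        recorded += traces
        survivor_epochs += EPOCHS

print(f"true catastrophe rate:              {P_CAT:.4f}")
print(f"rate inferred from survivor record: {recorded / survivor_epochs:.4f}")
```

With these numbers the survivors' record suggests a rate roughly ten times lower than the true one, which is exactly the sense in which the past record fails to "defend" us.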

I wrote an essay on the possibility of the artificial initiation of a fusion explosion in the giant planets and other objects of the Solar System. It is not a scientific article, but an attempt to collect all the necessary information about this existential risk. I conclude that it cannot be ruled out as a technical possibility, and that it could later be carried out as an act of space war, which could sterilize the entire Solar System.

There are some events which are very improbable but whose consequences could be almost infinitely large (e.g., black holes at the LHC). The possibility of nuclear ignition of a self-sustaining fusion reaction in giant planets like Jupiter and Saturn, which could lead to the explosion of the planet, is one of them.

Inside the giant planets, thermonuclear fuel sits under high pressure and at high density. For certain substances this density is higher (except perhaps for water) than the density of the same substances on Earth. Large quantities of the substance would not fly away from the reaction zone quickly, leaving enough time for a large energy release. This fuel has never been involved in fusion reactions, so it retains its easily combustible components, namely deuterium, helium-3 and lithium, which have long since burned away in the stars. In addition, the interiors of the giant planets contain fuel for reactions that could catch fire explosively, namely the triple-helium reaction (3 He-4 → C-12) and the fusion of hydrogen with oxygen, which, however, require much higher temperatures to start. Matter in the bowels of the giant planets is a degenerate metallic sea, just like the matter of white dwarfs, in which explosive thermonuclear burning regularly takes place in the form of helium flashes and flashes of type I supernovae.
The more opaque the environment, the better the chances for the reaction to propagate in it, since less energy is lost to scattering; and since the bowels of the giant planets contain many impurities, lower transparency can be expected. Gravitational differentiation and chemical reactions can lead to the formation of regions within a planet that are more suitable for starting the reaction in its initial stages.

The stronger the explosion of the fuse, the larger the initial region of burning, and the more likely the reaction is to be self-sustaining, since the energy losses will be smaller and the quantity of reacting substance and the reaction time greater. It can be assumed that with a sufficiently powerful fuse the reaction would become self-sustaining.

Recently the Galileo spacecraft was plunged into Jupiter. Galileo carried pellets of plutonium-238 which, under certain assumptions, could undergo a chain reaction and produce a nuclear explosion. It is interesting to ask whether this could lead to the explosion of a giant planet. The Cassini spacecraft may eventually enter Saturn, with unknown consequences. In the future, the deliberate ignition of a giant planet may become a means of space war. Such an event could sterilize the entire Solar System.

The scientific basis for our study can be found in the article “Necessary conditions for the initiation and propagation of nuclear detonation waves in plane atmospheres”,
Thomas Weaver and Lowell Wood, Physical Review A 20, 1 July 1979,
http://www.lhcdefense.org/pdf/LHC%20-%20Sancho%20v.%20Doe%20-%20Atmosphere%20Ignition%20-%202%20-%20Wood_AtmIgnition-1.pdf

The paper rejected the possibility of a thermonuclear detonation propagating in the Earth's atmosphere or oceans, on the grounds that radiative losses cannot be balanced (this does not exclude reactions that involve only a small amount of earthly matter, but even that would be enough for disastrous consequences and human extinction).

There it is said: “We, therefore, conclude that thermonuclear-detonation waves cannot propagate in the terrestrial ocean by any mechanism by an astronomically large margin.

It is worth noting, in conclusion, that the susceptibility to thermonuclear detonation of a large body of hydrogenous material is an exceedingly sensitive function of its isotopic composition, and, specifically, of the deuterium atom fraction, as is implicit in the discussion just preceding. If, for instance, the terrestrial oceans contained deuterium at any atom fraction greater than 1:300 (instead of the actual value of 1:6000), the ocean could propagate an equilibrium thermonuclear-detonation wave at a temperature ≲2 keV (although a fantastic 10**30 ergs—2 x 10**7 MT, or the total amount of solar energy incident on the Earth for a two-week period—would be required to initiate such a detonation at a deuterium concentration of 1:300). Now a non-negligible fraction of the matter in our own galaxy exists at temperatures much less than 300 °K, i.e., the gas-giant planets of our stellar system, nebulas, etc. Furthermore, it is well known that thermodynamically-governed isotopic fractionation ever more strongly favors higher relative concentration of deuterium as the temperature decreases, e.g., the D:H concentration ratio in the ~10**2 K Great Nebula in Orion is about 1:200. Finally, orbital velocities of matter about the galactic center of mass are of the order of 3 x 10**7 cm/sec at our distance from the galactic core.

It is thus quite conceivable that hydrogenous matter (e.g., CH4, NH3, H2O, or just H2) relatively rich in deuterium (≳1 at. %) could accumulate at its normal, zero-pressure density in substantial thicknesses on planetary surfaces, and such layering might even be a fairly common feature of the colder, gas-giant planets. If thereby highly enriched in deuterium (≳10 at. %), thermonuclear detonation of such layers could be initiated artificially with attainable nuclear explosives. Even with deuterium atom fractions approaching 0.3 at. % (less than that observed over multiparsec scales in Orion), however, such layers might be initiated into propagating thermonuclear detonation by the impact of large (diam ≳10**2 m), ultra-high velocity (≳3 x 10**7 cm/sec) meteors or comets originating from nearer the galactic center. Such events, though exceedingly rare, would be spectacularly visible on distance scales of many parsecs.”
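As a quick sanity check on the quoted figures (standard physical constants; the arithmetic is mine, not the paper's), the 10**30 ergs initiation energy does indeed come out to about 2 x 10**7 megatons, and to roughly two weeks of sunlight intercepted by the Earth:

```python
import math

# Sanity checks on the figures quoted above, using standard constants.
ERG_PER_MEGATON = 4.184e22   # 1 megaton TNT = 4.184e15 J = 4.184e22 erg
E_IGNITE_ERG = 1e30          # quoted initiation energy at D:H = 1:300

print(E_IGNITE_ERG / ERG_PER_MEGATON)       # ~2.4e7 MT, i.e. "2 x 10**7 MT"

SOLAR_CONSTANT = 1361.0      # W/m^2 at the top of the atmosphere
R_EARTH = 6.371e6            # Earth radius, m
TWO_WEEKS = 14 * 86400       # seconds
intercepted_joules = SOLAR_CONSTANT * math.pi * R_EARTH**2 * TWO_WEEKS
print(intercepted_joules * 1e7)             # ~2.1e30 erg, the two-week solar figure
```

Both numbers agree with Weaver and Wood's text, which is reassuring given how badly the scanned quote was garbled.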

Full text of my essay is here: http://www.scribd.com/doc/8299748/Giant-planets-ignition

November 14, 2008
Computer History Museum, Mountain View, CA

http://ieet.org/index.php/IEET/eventinfo/ieet20081114/

Organized by: Institute for Ethics and Emerging Technologies, the Center for Responsible Nanotechnology and the Lifeboat Foundation

A day-long seminar on threats to the future of humanity, natural and man-made, and the pro-active steps we can take to reduce these risks and build a more resilient civilization. Seminar participants are strongly encouraged to pre-order and review the Global Catastrophic Risks volume edited by Nick Bostrom and Milan Cirkovic, and contributed to by some of the faculty for this seminar.

This seminar will precede the futurist mega-gathering Convergence 08, November 15–16 at the same venue, which is co-sponsored by the IEET, Humanity Plus (World Transhumanist Association), the Singularity Institute for Artificial Intelligence, the Immortality Institute, the Foresight Institute, the Long Now Foundation, the Methuselah Foundation, the Millennium Project, the Reason Foundation and the Acceleration Studies Foundation.

SEMINAR FACULTY

  • Nick Bostrom Ph.D., Director, Future of Humanity Institute, Oxford University
  • Jamais Cascio, research affiliate, Institute for the Future
  • James J. Hughes Ph.D., Exec. Director, Institute for Ethics and Emerging Technologies
  • Mike Treder, Executive Director, Center for Responsible Nanotechnology
  • Eliezer Yudkowsky, Research Associate, Singularity Institute for Artificial Intelligence
  • William Potter Ph.D., Director, James Martin Center for Nonproliferation Studies

REGISTRATION:
Before Nov 1: $100
After Nov 1 and at the door: $150

The Singularity Institute for Artificial Intelligence has announced the details of The Singularity Summit 2008. The event will be held October 25, 2008 at the Montgomery Theater in San Jose, California. Previous summits have featured Nick Bostrom, Eric Drexler, Douglas Hofstadter, Ray Kurzweil, and Peter Thiel.

Keynote speakers include Ray Kurzweil, author of The Singularity is Near, and Justin Rattner, CTO of Intel. At the Intel Developer Forum on August 21, 2008, Rattner explained why he thinks the gap between humans and machines will close by 2050. “Rather than look back, we’re going to look forward 40 years,” said Rattner. “It’s in that future where many people think that machine intelligence will surpass human intelligence.”

Other featured speakers include:

  • Dr. Ben Goertzel, CEO of Novamente, director of research at SIAI
  • Dr. Marvin Minsky
  • Nova Spivack, CEO of Radar Networks, creator of Twine.com
  • Dr. Vernor Vinge
  • Eliezer Yudkowsky

You can find a comprehensive list of other upcoming Singularity and Artificial Intelligence events here.

Here is something to post to your websites, and a vote to cast online.

Aubrey de Grey can get $1.5 million for the Methuselah Foundation if enough people vote.

Voting ends September 1st, so take a second to vote now.
Any US Amex cardmember or US resident (who makes a guest account) can vote.

Here is the page where you can vote to “nominate”.

The Methuselah Foundation page has some more details if you are interested; to vote, though, you only need to click on the link above…

The UK’s Guardian today published details of a report produced by Britain’s Security Service (MI5) entitled ‘Understanding radicalization and violent extremism in the UK’. The report comes from MI5’s internal behavioral analysis unit and contains some interesting and surprising conclusions. The Guardian covers many of these in depth (so no need to go over them here), but one point worth highlighting is the report’s claim that religion is not, and was not, a contributory factor in the radicalization of the home-grown terrorist threat that the UK faces. In fact, the report goes on to state that a strong religious faith protects individuals from the effects of extremism. This viewpoint is gathering strength and coincides with an article written by Martin Amis in the Wall Street Journal, which also argues that ‘terrorism’s new structure’ is about the quest for fame and the thirst for power, with religion simply acting as a “means of mobilization”.

All of this also tends to agree with the assertion made by Philip Bobbitt in ‘Terror and Consent’ that al-Qaeda is simply version 1.0 of a new type of terrorism for the 21st century. This type of terrorism is attuned to the advantages and pressures of a market-based world and acts more like a Silicon Valley start-up company than the Red Brigades: flexible, fast-moving and wired, taking advantage of globalization to pursue a violent agenda.

This all raises the question: what next? If al-Qaeda is version 1.0, what is 2.0? This is of course hard to discern, but looking at two near-certain trends that will shape humanity over the next 20 years, urbanization and virtualization, throws up some interesting potential opponents who are operating today. The road to mass urbanization is currently being highlighted by the 192021 project (19 cities, 20 million people in the 21st century), which, amongst other things, points to the heavy use of slum areas to grow the cities of the 21st century. Slum areas from Delhi to Sao Paulo are today being exploited by Nigerian drug organizations, which are able to recruit the indigenous people to build their own cities within cities. This kind of highly profitable criminal activity in areas beyond the vision of government is a disturbing incubator.

Increased global virtualization complements urbanization as well as standing alone. Virtual environments provide a useful platform for any kind of real-life extremist (as is now widely accepted), but it is the formation of groups within virtual spaces that then spill out into real space that could become a significant feature of the 21st-century security picture. This is happening with ‘Project Chanology’, a group that was formed virtually with some elements of the Anonymous movement in order to disrupt the Church of Scientology. While Project Chanology (WhyWeProtest website) began as a series of cyber actions directed at Scientology’s website, it is now organizing legal protests at Scientology buildings: a shift from the virtual to the real. A more sinister take on this is the alleged actions of the Patriotic Nigras, a group dedicated to the disruption of Second Life, which has reportedly taken to the tactic of ‘swatting’, the misdirection of armed police officers to a victim’s home address: a disturbing spill-over into real space. Therefore, whatever pattern future terrorist movements follow, there are signs that religion will play a peripheral rather than central role.

Originally posted on the Counterterrorism blog.

Researchers from Imperial College in London, England, isolated the receptor in the lungs that triggers the immune overreaction to flu.

With the receptor identified, a therapy can be developed that will bind to the receptor, preventing the deadly immune response. Also, by targeting a receptor in humans rather than a particular strain of flu, therapies developed to exploit this discovery would work regardless of the rapid mutations that bedevil flu vaccine producers every year.

The flu kills 250,000 to 500,000 people in an average year, with epidemics reaching 1 to 2 million deaths (apart from the Spanish flu, which was far more severe).

This discovery could lead to treatments which turn off the inflammation in the lungs caused by influenza and other infections, according to a study published today in the journal Nature Immunology. The virus is often cleared from the body by the time symptoms appear, and yet symptoms can last for many days because the immune system continues to fight within the damaged lung. The immune system is essential for clearing the virus, but if it is not quickly contained it can overreact and damage the body.

The immune overreaction accounts for the high percentage of young, healthy people who died in the vicious 1918 flu pandemic. While the flu usually kills the very young or the sickly and old, the pandemic flu provoked healthy people’s stronger immune systems to react even more profoundly than usual, exacerbating the symptoms and ultimately causing between 50 and 100 million deaths worldwide. These figures from the past make the new discovery that much more important, as new therapies based on this research could prevent a future H5N1 bird flu pandemic from turning into a repeat of the 1918 Spanish flu.

In the new study, the researchers gave mice infected with influenza a mimic of CD200, or an antibody to stimulate CD200R, to see if these would enable CD200R to bring the immune system under control and reduce inflammation.

The mice that received treatment had less weight loss than control mice and less inflammation in their airways and lung tissue. The influenza virus was still cleared from the lungs within seven days and so this strategy did not appear to affect the immune system’s ability to fight the virus itself.

The researchers hope that in the event of a flu pandemic, such as a pandemic of H5N1 avian flu that had mutated to be transmissible between humans, the new treatment would add to the current arsenal of anti-viral medications and vaccines. One key advantage of this type of therapy is that it would be effective even if the flu virus mutated, because it targets the body’s overreaction to the virus rather than the virus itself.

In addition to the possible applications for treating influenza, the researchers also hope their findings could lead to new treatments for other conditions where excessive immunity can be a problem, including other infectious diseases, autoimmune diseases and allergy.

Cross-posted from Next Big Future by Brian Wang, Lifeboat Foundation Director of Research

I am presenting disruption events for humans, and also for biospheres and planets, and where I can I am correlating them with historical frequency and scale.

There has been previous work on categorizing and classifying extinction events. There is Bostrom’s paper, and there is also the work by Jamais Cascio and Michael Anissimov on classification and identifying risks (presented below).

A recent article discusses the inevitable “end of societies” (it refers to civilizations, but it seems to be referring more to things like the end of the Roman empire, which still ends up later with Italy, Austria-Hungary, etc. emerging).

The theories around complexity seem to me to say that core developments along connected S-curves of technology and societal processes (around key areas of energy, transportation, governing efficiency, agriculture, production) cap out, and then a society falls back (a soft or hard dark age), reconstitutes, and starts back up again.

Here is a wider range of disruptions, which can also be correlated to the frequency with which they have occurred historically.

High growth dropping to low growth (short business cycles, every few years)
Recession (soft or deep), every five to fifteen years
Depressions (every 50–100 years, can be more frequent)

List of recessions for the USA (includes depressions)

Differences recession/depression

A good rule of thumb for determining the difference between a recession and a depression is to look at the changes in GNP. A depression is any economic downturn where real GDP declines by more than 10 percent; a recession is an economic downturn that is less severe. By this yardstick, the last depression in the United States was from May 1937 to June 1938, when real GDP declined by 18.2 percent. The Great Depression of the 1930s can be seen as two separate events: an incredibly severe depression lasting from August 1929 to March 1933, during which real GDP declined by almost 33 percent; a period of recovery; then another, less severe depression in 1937–38. (Depressions come every 50–100 years, and were more frequent in the past.)
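As a minimal sketch, the rule of thumb above reduces to a one-line threshold test (the 10 percent cutoff is from the rule itself; the function is just an illustration):

```python
# Rule-of-thumb classifier: a depression is a downturn where real GDP
# declines by more than 10 percent; anything less severe is a recession.
def classify_downturn(real_gdp_decline_pct: float) -> str:
    return "depression" if real_gdp_decline_pct > 10.0 else "recession"

print(classify_downturn(18.2))   # the 1937-38 episode -> "depression"
print(classify_downturn(33.0))   # 1929-33, almost 33 percent -> "depression"
print(classify_downturn(3.0))    # a typical postwar downturn -> "recession"
```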

Dark age (period of societal collapse, soft/light or regular)
I would say the difference between a long recession and a dark age has to do with a breakdown of societal order, some level of population decline or die-back, and a loss of knowledge or breakdown of education. (Once per thousand years.)

I would say that a soft dark age is also something like what China had from the 1400s to 1970:
basically a series of really bad societal choices. Maybe it is something between a depression and a dark age, or something that does not categorize as neatly, but an underperformance by twenty times versus competing groups. Perhaps there should be some classification of societal disorder, with levels and categories of major society-wide screw-ups (historic-level mistakes). The Chinese experience, I think, was triggered by the renunciation of the ocean-going fleet and of outside ideas and technology, plus a lot of follow-on screw-ups.

Plagues played a part in weakening the Roman and Han empires.

Here is a societal-collapse talk which includes Toynbee’s analysis.

Toynbee argues that the breakdown of civilizations is not caused by loss of control over the environment, over the human environment, or by attacks from outside. Rather, it comes from the deterioration of the “Creative Minority,” which eventually ceases to be creative and degenerates into merely a “Dominant Minority” (which forces the majority to obey without meriting obedience). He argues that creative minorities deteriorate due to a worship of their “former self,” by which they become prideful and fail to adequately address the next challenge they face.

My take is that the Enlightenment would be strengthened by a larger creative majority, where everyone has a stake in, and the capability to, creatively advance society. I have an article about who the elite are now.

Many now argue that the dark ages were not as completely bad as commonly believed.
The dark ages are also called the Middle Ages.

Population during the Middle Ages

Between dark age/social collapse and extinction there are levels of decimation/devastation, using orders of magnitude of population loss: 90+%, 99%, 99.9%, 99.99%. (A small formula for this scale is sketched after the list.)

Level 1 decimation = 90% population loss
Level 2 decimation = 99% population loss
Level 3 decimation = 99.9% population loss

Level 9 population loss would pretty much mean extinction for current human civilization: only 6 or 7 people left, or fewer, which would not be a viable population.

Decimation can be regional or global, and can cover some number of species.
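The scale is just orders of magnitude of population lost, so it reduces to a logarithm (the function is my illustration; the levels are as defined above):

```python
import math

# Decimation level = orders of magnitude of population lost:
# level 1 = 90% loss, level 2 = 99% loss, level 3 = 99.9% loss, ...
def decimation_level(pop_before: float, pop_after: float) -> float:
    return -math.log10(pop_after / pop_before)

print(decimation_level(6.7e9, 6.7e8))  # 1.0 -> 90% loss
print(decimation_level(6.7e9, 67.0))   # 8.0 -> 99.999999% loss
print(decimation_level(6.7e9, 6.7))    # 9.0 -> near-extinction (6-7 people)
```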

Categorizations of extinctions and end-of-world categories

Extinction can be regional or global, and can cover some number of species.

Mass extinction events have occurred in the past, to other species (for each species there can only be one extinction event): the dinosaurs, and many others.

Unfortunately Michael’s Accelerating Future blog is having some issues, so here is a cached link.

Michael was identifying man-made risks. The easier-to-explain existential risks (remember, an existential risk is something that can set humanity way back, not necessarily killing everyone):

1. neoviruses
2. neobacteria
3. cybernetic biota
4. Drexlerian nanoweapons

The hardest to explain is probably #4. My proposal here is that, if someone has never heard of the concept of existential risk, it is easier to focus on these first four before even daring to mention the latter ones. But here they are anyway:

5. runaway self-replicating machines (“grey goo” is not a recommended term because it is too narrow)
6. destructive takeoff initiated by an intelligence-amplified human
7. destructive takeoff initiated by a mind upload
8. destructive takeoff initiated by an artificial intelligence

Another classification scheme is the eschatological taxonomy by Jamais Cascio on Open the Future. His scheme has seven categories, one with two sub-categories; a machine-readable rendering follows the list. These are:

0: Regional Catastrophe (examples: moderate-case global warming, minor asteroid impact, local thermonuclear war)
1: Human Die-Back (examples: extreme-case global warming, moderate asteroid impact, global thermonuclear war)
2: Civilization Extinction (examples: worst-case global warming, significant asteroid impact, early-era molecular nanotech warfare)
3a: Human Extinction, Engineered (examples: targeted nano-plague, engineered sterility absent radical life extension)
3b: Human Extinction, Natural (examples: major asteroid impact, methane clathrates melt)
4: Biosphere Extinction (examples: massive asteroid impact, “iceball Earth” reemergence, late-era molecular nanotech warfare)
5: Planetary Extinction (examples: dwarf-planet-scale asteroid impact, nearby gamma-ray burst)
X: Planetary Elimination (example: post-Singularity beings disassemble planet to make computronium)
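For readers who like their taxonomies machine-readable, here is the same scheme as a small lookup table (the structure is my rendering; the categories and examples are Cascio’s, as listed above):

```python
# Cascio's eschatological taxonomy as a lookup table.
ESCHATOLOGICAL_TAXONOMY = {
    "0":  ("Regional Catastrophe",
           ["moderate-case global warming", "minor asteroid impact",
            "local thermonuclear war"]),
    "1":  ("Human Die-Back",
           ["extreme-case global warming", "moderate asteroid impact",
            "global thermonuclear war"]),
    "2":  ("Civilization Extinction",
           ["worst-case global warming", "significant asteroid impact",
            "early-era molecular nanotech warfare"]),
    "3a": ("Human Extinction, Engineered",
           ["targeted nano-plague",
            "engineered sterility absent radical life extension"]),
    "3b": ("Human Extinction, Natural",
           ["major asteroid impact", "methane clathrates melt"]),
    "4":  ("Biosphere Extinction",
           ["massive asteroid impact", "'iceball Earth' reemergence",
            "late-era molecular nanotech warfare"]),
    "5":  ("Planetary Extinction",
           ["dwarf-planet-scale asteroid impact", "nearby gamma-ray burst"]),
    "X":  ("Planetary Elimination",
           ["post-Singularity beings disassemble planet to make computronium"]),
}

name, examples = ESCHATOLOGICAL_TAXONOMY["3b"]
print(name, "-", "; ".join(examples))
```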

A couple of interesting posts about historical threats to civilization and life by Howard Bloom.

Natural climate shifts, and threats from space (not asteroids but interstellar gases).

Humans are not the most successful life; bacteria are the most successful. Bacteria have survived for 3.85 billion years, humans for 100,000 years. All other kinds of life have lasted no more than 160 million years. [Other species have only managed to hang in there for anywhere from 1.6 million years to 160 million. We humans are one of the shortest-lived natural experiments around. We’ve been here in one form or another for a paltry two and a half million years.] If your numbers are not big enough and you are not diverse enough, then something in nature eventually wipes you out.

Following the bacteria survival model could mean using transhumanism as a survival strategy: creating more diversity to allow for better survival. That would mean humans adapted to living under the sea, deep in the earth, in various niches in space, with more radiation resistance, in non-biological forms, etc. It would also mean spreading into space (panspermia). Individually, using technology, we could become very successful at life extension, but it will take more than that to make a good long-term survival plan for human civilization, society, and the species.

Other periodic challenges:
142 mass extinctions, 80 glaciations in the last two million years, a planet that may once have been a frozen iceball, and a klatch of global warmings in which the temperature soared by 18 degrees in ten years or less.

In the last 120,000 years there were 20 interludes in which the temperature of the planet shot up 10 to 18 degrees within a decade. Until just 10,000 years ago, the Gulf Stream shifted its route every 1,500 years or so. This would melt mega-islands of ice, put our coastal cities beneath the surface of the sea, and strip our farmlands of the conditions they need to produce the food that feeds us.

The solar system has a 240-million-year orbit around the center of our galaxy, an orbit that takes us through interstellar gas clusters called “local fluff”: clusters that strip our planet of its protective heliosphere, bombard the Earth with cosmic radiation, and trigger giant climate changes.