
It’s been a while since anyone contributed a post on space exploration here on the Lifeboat blog, so I thought I’d offer a few thoughts on potential hazards to future interstellar travel, if indeed humanity ever attempts to explore that far into space.

It is only recently that the Voyager probes gave us some idea of the nature of the boundary between our solar system and what is commonly referred to as the “local fluff”, the Local Interstellar Cloud, through which we have been travelling for the past 100,000 years or so, and through which we will continue to travel for another 10,000 or 20,000 years yet. The cloud has a temperature of about 6,000°C, albeit it is very tenuous.

We are protected from the effects of the local fluff by the solar wind and the sun’s magnetic field; the front between the two lies just beyond the termination shock, where the solar wind slows to subsonic velocities. Here, in the heliosheath, the solar wind becomes turbulent through its interaction with the interstellar medium, keeping the interstellar medium at bay from the inner solar system. This is the region currently under study by the Voyager 1 and Voyager 2 space probes. It has been hypothesised that there may be a hydrogen wall further out, between the bow shock and the heliopause, composed of ISM interacting with the edge of the heliosphere: yet another obstacle to consider for interstellar travel.

The short of it is that what many consider ‘open space’ beyond the Kuiper belt may in fact hold many more mission-threatening obstacles on the way out of our solar system. Opinions welcome. I am not an expert on this.

High energy experiments like the LHC at the nuclear research centre CERN are extreme energy consumers (needing the power of a nuclear plant). Their construction is extremely costly (presently 7 billion euros), and practical benefits are not in sight. The experiments may even pose existential risks, and these risks have not been properly investigated.

This is not the first time that CERN has announced record energies and news around April 1, apparently hoping that critique and concerns about the risks might be misinterpreted as an April Fools’ joke. Additionally, CERN regularly starts up the LHC during the Easter holidays and just before weekends, when news offices are empty and people prefer to spend peaceful days with their friends and families.

CERN has just announced new record collision energies at the LHC. And instead of conducting a neutral risk assessment, the nuclear research centre plans costly upgrades of its Big Bang machine. Facing an LHC upgrade in 2013 costing up to CHF 1 billion and the prospect of a Mega-LHC in 2022, one must ask: how long will it take until risk researchers are finally integrated into a neutral safety assessment?

There is ample evidence of the need for an external and multidisciplinary safety assessment of the LHC. According to a pre-study in risk research, CERN meets less than a fifth of the criteria for a modern risk assessment (see the press release below). It is not acceptable that the clueless member states point to the operator CERN itself, while it regards its self-imposed security measures as sufficient, despite critique from risk researchers, continuous debate, and the publication of further papers pointing to concrete dangers and even existential risks (black holes, strangelets) that may arise from the experiments sooner or later. At present, science has to admit that the risk is disputed and essentially unknown.

It will not be possible to keep up this ostrich policy much longer. Especially in view of the planned LHC upgrades, CERN will be confronted with increasing critique from scientific and civil quarters that the most powerful particle collider has not yet been subjected to a neutral and multidisciplinary safety assessment. CERN has not yet responded to pragmatic proposals for such a process, which should constructively involve both critics and CERN. Further legal steps from various sides are also possible.

The member states that finance the CERN budget, the UN, and private funds are called upon to provide resources to finally initiate a neutral and multidisciplinary risk assessment.

German version of this article published in Oekonews: http://www.oekonews.at/index.php?mdoc_id=1069458

Related LHC-Critique press release and open letter to CERN:

https://lifeboat.com/blog/2012/02/lhc-critique-press-release-feb-13-2012-cern-plans-mega-particle-collider-communication-to-cern-for-a-neutral-and-multi-disciplinary-risk-assessment-before-any-lhc-upgrade

Typical physicist’s April joke on stable black holes at the LHC (April 1 2012, German): http://www.scienceblogs.de/hier-wohnen-drachen/2012/04/stabiles-minischwarzes-loch-aus-higgsteilchen-erzeugt.php

Latest publications of studies demonstrating risks arising from the LHC experiment:

Prof Otto E. Rössler: http://www.academicjournals.org/AJMCSR/PDF/pdf2012/Feb/9%20Feb/Rossler.pdf

Thomas Kerwick B.Tech. M.Eng. Ph.D.: http://www.vixra.org/abs/1203.0055

Brief summary of the basic problem by LHC-Kritik (still valid since Sep. 2008): http://lhc-concern.info/wp-content/uploads/2008/12/lhc-kritik-cern-1st-statement-summary-908.pdf

Detailed summary of the scientific LHC risk discussion by LHC-Kritik and ConCERNed International: http://lhc-concern.info/wp-content/uploads/2010/03/critical-revision-of-lhc-risks_concerned-int.pdf

We wish you a happy Easter and hope for your support of our pragmatic proposals to urgently increase safety in these new fields of nuclear physics.

LHC Critique / LHC Kritik — Network for Safety at nuclear and sub-nuclear high energy Experiments.

www.LHC-concern.info

[email protected]

Tel.: +43 650 629 627 5

New Facebook group: http://www.facebook.com/groups/LHC.Critique/

- CERN’s annual meeting in Chamonix to fix LHC schedules: increasing energies; no external and multi-disciplinary risk assessment so far; future plans target a costly LHC upgrade in 2013 and a Mega-LHC in 2022.

- COMMUNICATION to CERN – For a neutral and multi-disciplinary risk assessment before any LHC upgrade

According to CERN’s Chamonix workshop (Feb. 6–10 2012) and a press release from today: in 2012 the collision energies of the world’s biggest particle collider, the LHC, are to be increased from 3.5 to 4 TeV per beam, and the luminosity is planned to be increased by a factor of three. This means many more particle collisions at higher energies.

CERN plans to shut down the LHC in 2013 for about 20 months for a very costly upgrade (around CHF 1 billion?) in order to run the LHC at double the present energies (7 TeV per beam) afterwards.

Future plans: A High-Luminosity LHC (HL-LHC) is planned, “tentatively scheduled to start operating around 2022” — with a beam energy increased from 7 to 16.5 TeV(!):
http://cdsweb.cern.ch/journal/CERNBulletin/2012/06/News%20Articles/1423292?ln=en

One might really ask where this is supposed to lead, sooner or later, without the risks being properly investigated. Many critics from different fields are severely alarmed.

For comparison: the AMS-02 experiment, which directly measures cosmic rays in space, operates on a scale of around 1.5 TeV. Very high-energy cosmic rays have only been measured indirectly (via their momentum). The type, velocity, mass and origin of these particles are unknown. In any case, the number of collisions under the extreme and unprecedented artificial conditions at the LHC is many orders of magnitude higher than anywhere else in the nearer cosmos.

There were many talks on machine safety at the Chamonix meeting. The safety of humans and the environment was evidently not an official topic. That is why critics turned to CERN in an open letter:

———————————————————–
Communication on LHC Safety directed to CERN

For a neutral and multidisciplinary risk assessment to be done before any LHC upgrade

—————————-
Communiqué to CERN
—————————-

Dear management and scientists at CERN,

Astronomer and Leonardo-publisher Roger Malina recently emphasized that the main problem in research is that “curiosity is not neutral”. And he concluded: “There are certain problems where we cannot cloister the scientific activity in the scientific world, and I think we really need to break the model. I wish CERN, when they had been discussing the risks, had done that in an open societal context, and not just within the CERN context.”

Video of Roger Malina’s presentation at Ars Electronica, following prominent philosopher and leading constructivist Humberto Maturana’s remarkable lecture on science and “certainty”: http://www.youtube.com/watch?v=DOZS2qJrVkU

In the eyes of many critics, a number of questions related to LHC safety have not been ruled out, and some critics have concrete and severe concerns. The validity of the cosmic-ray comparison is also challenged.

Australian risk researcher and ethicist Mark Leggett concludes in a paper that CERN meets less than a fifth of the criteria of a modern risk assessment:
http://lhc-concern.info/wp-content/uploads/2009/09/leggett_review_of_lsag_process_sept_1__09.pdf

Without getting into details of the LHC safety discussion – this article in the well-recognized Physics arXiv Blog (MIT’s Technology Review) states: “Black Holes, Safety, and the LHC Upgrade — If the LHC is to be upgraded, safety should be a central part of the plans.”

Similar to pragmatic critics, the author claims in his closing remarks: “What’s needed, of course, is for the safety of the LHC to be investigated by an independent team of scientists with a strong background in risk analysis but with no professional or financial links to CERN.”
http://www.technologyreview.com/blog/arxiv/27319/

The renowned Institute for Technology Assessment and Systems Analysis (ITAS) in Karlsruhe and other risk researchers have already signalled interest in cooperating. We think that CERN and its critics should naturally both be constructively involved in such a process.

Please act in favour of such a neutral and multi-disciplinary assessment, perhaps immediately following the present Chamonix meeting. Even if you feel sure that there is no reason for concern, this must be in your interest, as the matter is of both scientific and public concern.

In the name of many others:
[…]
————————–
LHC-Kritik / LHC-Critique
www.LHC-concern.info

Direct link to this Communication to CERN:
http://lhc-concern.info/?page_id=139
Also published in “oekonews”: http://www.oekonews.at/index.php?mdoc_id=1067776

CERN press release from Feb 13 2012:
http://press.web.cern.ch/press/PressReleases/Releases2012/PR01.12E.html

“Badly designed to understand the Universe — CERN’s LHC in critical Reflection by great Philosopher H. Maturana and Astrophysicist R. Malina”:
https://lifeboat.com/blog/2012/02/badly-designed-to-understand-the-universe-cerns-lhc-in-critical-reflection-by-great-philosopher-h-maturana-and-astrophysicist-r-malina

“LHC-Kritik/LHC-Critique – Network for Safety at experimental sub-nuclear Reactors” is a platform articulating the risks related to particle colliders and experimental high energy physics. LHC-Critique has produced a number of detailed papers demonstrating the insufficiency of the present safety measures from readily understandable perspectives, and still has a lawsuit pending at the European Court of Human Rights.

More info at LHC-Kritik / LHC-Critique:
www.LHC-concern.info
[email protected]
+43 650 629 627 5

Info on the outcomes of CERN’s annual meeting in Chamonix this week (Feb. 6–10 2012):

In 2012 LHC collision energies are to be increased from 3.5 to 4 TeV per beam, and the luminosity is planned to be increased substantially. This means many more particle collisions at higher energies.

CERN plans to shut down the LHC in 2013 for about 20 months for a very costly upgrade (around CHF 1 billion?) in order to run the LHC at 7 TeV per beam afterwards.

Future plans: A High-Luminosity LHC (HL-LHC) is planned, “tentatively scheduled to start operating around 2022” — with a beam energy increased from 7 to 16.5 TeV(!).

One might really ask where this is supposed to lead, sooner or later, without the risks being properly investigated.

For comparison: the AMS experiment, which directly measures cosmic rays in space, operates on a scale of around 1.5 TeV. Very high-energy cosmic rays have only been measured indirectly (via their momentum). The type, velocity, mass and origin of these particles are unknown. In any case, the number of collisions under the extreme and unprecedented artificial conditions at the LHC is many orders of magnitude higher than anywhere else in the nearer cosmos.

There were many talks on machine safety at the Chamonix meeting. The safety of humans and the environment was evidently not an official topic. There has been no reaction so far to the recent call for a genuinely neutral, external and multi-disciplinary risk assessment.

Official reports from the LHC performance workshop by CERN Bulletin:

http://cdsweb.cern.ch/journal/CERNBulletin/2012/06/News%20Articles/?ln=de

LHC Performance Workshop — Chamonix 2012:

https://indico.cern.ch/conferenceOtherViews.py?view=standard&confId=164089

Feb 10 2012: COMMUNICATION directed to CERN for a neutral and multidisciplinary risk assessment to be done before any LHC upgrade:

http://lhc-concern.info/?page_id=139

More info at LHC-Kritik / LHC-Critique: Network for Safety at experimental sub-nuclear Reactors:

www.LHC-concern.info

Famous Chilean philosopher Humberto Maturana describes “certainty” in science as subjective emotional opinion, to the astonishment of the assembled physics luminaries. French astronomer and “Leonardo” publisher Roger Malina hopes that the LHC safety issue will be discussed in a broader social context, not only within CERN’s narrower scientific framework.

(Article published in “oekonews”: http://oekonews.at/index.php?mdoc_id=1067777 )

The most recent edition of the renowned “Ars Electronica Festival” in Linz (Austria) was dedicated in part to an uncritical worship of the gigantic particle accelerator LHC (Large Hadron Collider) at the European Nuclear Research Centre CERN, located on the Franco-Swiss border. CERN in turn promoted an art prize with the idea of “cooperating closely” with the arts. This time the objections were of a philosophical nature, and they carried real weight.

In a thought-provoking presentation, Maturana addressed the limits of our knowledge and the intersubjective foundations of what we call “objective” and “reality.” His talk was peppered with excellent remarks and witty asides that contributed much to the accessibility of these fundamental philosophical problems: “Be realistic, be objective!”, Maturana pointed out, simply means that we want others to adopt our point of view. The great constructivist and founder of the concept of autopoiesis clearly distinguished his approach from a solipsistic position.

Given Ars Electronica’s spotlight on CERN and its experimental sub-nuclear research reactor, Maturana’s explanations were especially important; to the assembled CERN celebrities they may have come as a mixture of unpleasant surprise and something beyond their frame of reference.

During the question-and-answer period, Markus Goritschnig asked Maturana whether it wasn’t problematic that CERN is basically controlling itself, discarding a number of existential risks discussed in relation to the LHC (including hypothetical but mathematically demonstrable risks raised, and later downplayed, by physicists like Nobel Prize winner Frank Wilczek), and whether he thought it necessary to integrate other sciences besides physics, such as risk research, into the LHC safety assessment process. Maturana replied (in the video from about 1:17): “We human beings can always reflect on what we are doing and choose. And choose to do it or not to do it. And so the question is, how are we scientists reflecting upon what we do? Are we taking seriously our responsibility of what we do? […] We are always in the danger of thinking that, ‘Oh, I have the truth’, I mean — in a culture of truth, in a culture of certainty — because truth and certainty are not as we think — I mean certainty is an emotion. ‘I am certain that something is the case’ means: ‘I do not know’. […] We cannot pretend to impose anything on others; we have to create domains of interrogativity.”

Disregarding these reflections, Sergio Bertolucci (CERN) considered the peer-review system within the physics community sufficient scholarly control. He dismissed all the disputed risks with the “cosmic ray argument,” claiming that much more energetic collisions take place naturally in the atmosphere without any adverse effect. This CERN safety argument for the LHC can, however, be criticized from several perspectives, for example: very high-energy collisions can be measured only indirectly, and the collision frequency under the unprecedented artificial and extreme conditions at the LHC is many orders of magnitude higher than in the Earth’s atmosphere or anywhere else in the nearer cosmos.

The second presentation of the “Origin” Symposium III was held by Roger Malina, an astrophysicist and the editor of “Leonardo” (MIT Press), a leading academic journal for the arts, sciences and technology.

Malina opened with a disturbing fact: “95% of the universe is of an unknown nature, dark matter and dark energy. We sort of know how it behaves. But we don’t have a clue of what it is. It does not emit light, it does not reflect light. As an astronomer this is a little bit humbling. We have been looking at the sky for millions of years trying to explain what is going on. And after all of that and all those instruments, we understand only 3% of it. A really humbling thought. […] We are the decoration in the universe. […] And so the conclusion that I’d like to draw is that: We are really badly designed to understand the universe.”

The main problem in research is: “curiosity is not neutral.” When astrophysics reaches its limits, cooperation between arts and science may indeed be fruitful for various reasons and could perhaps lead to better science in the end. In a later communication Roger Malina confirmed that the same can be demonstrated for the relation between natural sciences and humanities or social sciences.

However, the astronomer emphasized that an “art-science collaboration can lead to better science in some cases. It also leads to different science, because by embedding science in the larger society, I think the answer was wrong this morning about scientists peer-reviewing themselves. I think society needs to peer-review itself and to do that you need to embed science differently in society at large, and that means cultural embedding and appropriation. Helga Nowotny at the European Research Council calls this ‘socially robust science’. The fact that CERN did not lead to a black hole that ended the world was not due to peer-review by scientists. It was not due to that process.”

One of Malina’s main arguments focused on differences in “the ethics of curiosity”. The best ethics in (natural) science include notions like intellectual honesty, integrity, organized scepticism, disinterestedness, impersonality, and universality. “Those are the belief systems of most scientists. And there is a fundamental flaw to that. And Humberto this morning really expanded on some of that. The problem is: Curiosity is embodied. You cannot make it into a neutral ideal of scientific curiosity. And here I got a quote of Humberto’s colleague Varela: ‘All knowledge is conditioned by the structure of the knower.’”

In conclusion, a better cooperation of various sciences and skills is urgently necessary, because: “Artists ask questions that scientists would not normally ask. Finally, why we want more art-science interaction is because we don’t have a choice. There are certain problems in our society today that are so tough we need to change our culture to resolve them. Climate change: we’ve got to couple the science and technology to the way we live. That’s a cultural problem, and we need artists working on that with the scientists every day of the next decade, the next century, if we survive it.”

Then Roger Malina turned directly to the LHC safety discussion and openly contradicted the safety assurances given earlier: he said he would generally hope for a much more open process concerning the LHC safety debate, rather than discussing it only within the narrow field of particle physics. Concretely: “There are certain problems where we cannot cloister the scientific activity in the scientific world, and I think we really need to break the model. I wish CERN, when they had been discussing the risks, had done that in an open societal context, and not just within the CERN context.”

Presently CERN is holding its annual meeting in Chamonix to fix the LHC’s 2012 schedule, aiming to increase luminosity by a factor of four to perhaps finally find the Higgs boson. This is against a 100-dollar bet by Stephen Hawking, who is convinced that micro black holes will be observed instead, immediately decaying by hypothetical “Hawking radiation”, with the God Particle’s blessing. In that case, Hawking pointed out, it would be he who gains the Nobel Prize. Quite ironically, official T-shirts were sold at Ars Electronica showing the “typical signature” of a micro black hole decaying at the LHC, a totally hypothetical process resting on a bunch of unproven assumptions.

In 2013 CERN plans to modify the LHC, owing to construction faults, for up to CHF 1 billion, in order to run the “Big Bang Machine” at double the present energies. A neutral and multi-disciplinary risk assessment is still lacking, while a number of scientists insist that their theories pointing to even global risks have not been invalidated. CERN’s latest safety assurance, comparing natural cosmic rays hitting the Earth with the LHC experiment, is valid only from rather narrow viewpoints. The relatively young analyses of high-energy cosmic rays are based on indirect measurements and calculations; the type, velocity, mass and origin of these particles are unknown. But taking the comparison at face value and calculating with the “reassuring” figures given by CERN PR, within ten years of operation the LHC, under extreme and unprecedented artificial circumstances, would produce as many high-energy particle collisions as occur in about 100,000 years in the entire atmosphere of the Earth. Just to illustrate the energetic potential of the gigantic facility: one LHC beam, thinner than a hair and consisting of billions of protons, carries the energy of an aircraft carrier moving at 12 knots.

This article in the Physics arXiv Blog (MIT’s Technology Review) reads: “Black Holes, Safety, and the LHC Upgrade — If the LHC is to be upgraded, safety should be a central part of the plans.”, closing with the claim: “What’s needed, of course, is for the safety of the LHC to be investigated by an independent team of scientists with a strong background in risk analysis but with no professional or financial links to CERN.”
http://www.technologyreview.com/blog/arxiv/27319/

Australian ethicist and risk researcher Mark Leggett concluded in a paper that CERN’s LSAG safety report on the LHC meets less than a fifth of the criteria of a modern risk assessment. There but for the grace of a goddamn particle? Probably not. Before pushing the LHC to its limits, CERN must be challenged by a really neutral, external and multi-disciplinary risk assessment.

Video recordings of the “Origin III” symposium at Ars Electronica (videos embedded in the original post):
Presentation by Humberto Maturana
Presentation by Roger Malina

“Origin” Symposia at Ars Electronica:
http://www.aec.at/origin/category/conferences/

Communication on LHC Safety directed to CERN
Feb 10 2012
For a neutral and multidisciplinary risk assessment to be done before any LHC upgrade
http://lhc-concern.info/?page_id=139

More info, links and transcripts of lectures at “LHC-Critique — Network for Safety at experimental sub-nuclear Reactors”:

www.LHC-concern.info

It is of course widely accepted that the Greenland ice sheet is melting at an alarming rate, that the melting is accelerating, and that the process is irreversible; when the sheet finally does melt, it will raise sea levels globally by 7 meters. This discounts any contribution from the West Antarctic ice sheet, which could add a further 5 meters, and the longer-term risk of the East Antarctic ice sheet, which is losing mass at a rate of 57 billion tonnes per year and, if melted in its entirety, would raise sea levels by a further 60 meters.

In this light it is rather ‘cute’ that this site, dedicated to existential risks to society, is called the Lifeboat Foundation, when one of our less-discussed risks is worldwide flooding on a massive scale, affecting major coastal cities, ports and industries right across the world.

Why do we continue to grow our cities below a safe limit of, say, 10 meters above sea level, when cities are built to last thousands of years but could now be flooded within hundreds? How many times do we have to witness disaster scenarios such as the Oklahoma City floods before we contemplate this occurring irreversibly to hundreds of cities across the world? Is it feasible to build large dams to preserve these cities, or is it a case of eventually evacuating and starting all over again? In the latter case, how do we safely decommission the chemical and nuclear plants that would need to be abandoned, in a responsible and non-environmentally-damaging procedure?

Let’s be optimistic here: the Antarctic ice sheets are unlikely to disappear on time scales we need to worry about today, but the Greenland ice sheet is topical. Can it be considered an existential risk if the process takes hundreds of years and we can slowly step out of the way, even though so much of the infrastructure we rely on would be relinquished? Will we just gradually abandon our cities for higher ground as insurance companies refuse to cover properties in coastal flooding areas? Or will we rise to the challenge and take the first steps to create eco-bubbles and ever larger dams to protect our cities?

I would like to hear others’ thoughts on this topic, particularly if anyone feels that the Greenland ice sheet situation is reversible…

I wouldn’t have paid much attention to the following topic were it not for the article appearing in an otherwise credible international news agency (MINA).

http://macedoniaonline.eu/content/view/17115/56/
http://wiki.answers.com/Q/What_is_the_gulf_of_aden_vortex

Whilst electro-magnetic disturbances occur naturally all the time, the suggestion that one in particular allegedly arose through industrial practices (ionospheric research, wormhole research(??)) invites curiosity. If anyone on one of the advisory boards for the various science disciplines has strong knowledge of electro-magnetic vortex-type features that can occur in nature, please explain the phenomena, whether they have any implications, and whether industry of any sort (in particular directed ionospheric heating) can cause such anomalies to appear from time to time.

I understand that there can be certain fluctuations and weakenings in the build-up to magnetic pole reversals, for example (though please correct me if I’m wrong here). That aside, one may enjoy the alleged reaction of certain defense forces (surely a spoof), which is at least good satire on how leaders of men can often fear the unknown.

I am taking the advice of a reader of this blog and devoting part 2 to examples of old school and modern movies and the visionary science they portray.

Things to Come 1936 — Event Horizon 1997
Things to Come was a disappointment to Wells, and Event Horizon was no less a disappointment to audiences. I found them both very interesting as showcases for technology and social challenges to come, if a little off the mark in regards to the exact technology and explicit social issues. In the final scene of Things to Come, Raymond Massey asks if mankind will choose the stars. What will we choose? I find this moment very powerful; it is perhaps the most eloquent expression of the whole genre of science fiction. Event Horizon was a complete counterpoint: a horror movie set in space with a starship modeled after a gothic cathedral. Event Horizon had a rescue crew put in stasis for a high-G, several-month journey to Neptune on a fusion-powered spaceship. High acceleration and fusion bring H-bombs to mind, and though not portrayed, this propulsion system is in fact a most probable future. Fusion “engines” are old hat in sci-fi despite the near certainty that the only places fusion will ever work as advertised are in a bomb or a star. The Event Horizon, haunted and consigned to hell, used a “gravity drive” to achieve star travel by “folding space.” Interestingly, a recent concept for a black-hole-powered starship is probably the most accurate forecast of the technology that will be used for interstellar travel in the next century. While ripping a hole in the fabric of space-time may be strictly science fantasy for the next thousand years at least, small-singularity propulsion using Hawking radiation to achieve a high fraction of the speed of light is mathematically sound and the most obvious future.

https://lifeboat.com/blog/2012/09/only-one-star-drive-can-work-so-far

That is, if humanity avoids an outbreak of engineered pathogens or any one of several other threats to our existence in that time frame.

Hand in hand with any practical method of journeying to other star systems is the concept of the “sleeper ship.” Not only is it as inevitable as the submarine or powered flight was in the past; the idea of putting human beings in cold storage would bring tremendous changes to society. Suspended animation using a cryopreservation procedure is by far the most radical and important global event possible, and perhaps probable, in the near future. The ramifications of a revivable whole-body cryopreservation procedure are truly incredible. Cryopreservation would be the most important event in the history of mankind; future generations would certainly mark it as the beginning of “modern” civilization. Though taken no more seriously than the possibility of personal computers once was, the advances in medical technology make any movie depicting suspended animation quite prophetic.

The Thing 1951/Them 1954 — Deep Impact 1998/Armageddon 1998
These four movies were essentially about the same… thing. Whether a space vampire not from Earth in the arctic, mutated super-organisms underneath the earth, or a big whatever in outer space on a collision course with Earth, the subject was a monstrous threat to our world, the end of humankind on Earth being the common theme. The Lifeboat blog is about such threats, and The Thing and Them would also appeal to any fan of Barbara Ehrenreich’s book Blood Rites. It is interesting that while we appreciate in a personal way what it means to face monsters or the supernatural, we just do not “get” the much greater threats only recently revealed by impact craters like Chicxulub. In this way these movies, dealing with instinctively and non-instinctively realized threats, have an important relationship to each other. And this connection extends to the more modern sci-fi creature features of past decades. Just how much The Thing and Them contributed to the greatest military sci-fi movie of the 20th century (Aliens, of course) will probably never be known. Director James Cameron once paid several million dollars out of court to sci-fi writer Harlan Ellison after admitting during an interview to using Ellison’s work, so he will not be making that mistake again. The second and third place honors go to Starship Troopers and Predator, the former the work of Dutch filmmaker Paul Verhoeven.

While The Thing and Them still play well, and Deep Impact (directed by Mimi Leder) is a good flick with uncanny predictive elements such as a black president and a tidal wave, Armageddon is worthless. I mention this trash cinema only because it is necessary for comparison, and to applaud the 3 minutes in which the cryogenic fuel transfer procedure is shown to be the farce that it is in actuality. Only one of the worst movie directors ever, or the space tourism industry, would parade such a bad idea before the public.
Ice Station Zebra 1968 — The Road 2009
Ice Station Zebra was supposedly based on a true incident. This cold war thriller featured Rock Hudson as the quintessential submarine commander and was a favorite of Howard Hughes. By this time a recluse, Hughes purchased a Las Vegas TV station so he could watch the movie over and over. For those who have not seen it, I will not spoil the sabotage sequence, which has never been equaled. I pair Ice Station Zebra and The Road because they make a fine quartet, or rather sextet, with The Thing/Them and Deep Impact/Armageddon.

The setting for many of the scenes in these movies is a wasteland of ice, desert, cometoid, or dead forest. While Armageddon is one of the worst movies ever made on a big budget, The Road must be one of the best on a small budget, if accuracy is a measure of best. The Road was a problem for the studio that produced it, and release was delayed due to the reaction of the test audiences: all viewers left the theatre profoundly depressed. It is a shockingly realistic movie, and it disturbed me to the point where I started writing about impact deflection. The connection between Armageddon and The Road, two movies so different, is the threat and aftermath of an asteroid or comet impact. While The Road never specifies an impact as the disaster that ravaged the planet, it fits the story perfectly. Armageddon has a few accurate statements about impacts mixed in with ludicrous plot devices that make the story a bad experience for anyone concerned with planetary protection. It seems almost blasphemous, and positively criminal, to make such a juvenile for-profit enterprise out of an inevitable event that is as serious as serious gets. Do not watch it. Ice Station Zebra, on the other hand, is a must-see and is in essence a showcase of the only tools available to prevent The Road from becoming reality. Nuclear weapons and spacecraft, the very technologies that so many feared would destroy mankind, are the only hope of saving the human race in the event of an impending impact.

Part 3:
Gog 1954 — Stealth 2005
Fantastic Voyage 1966 — The Abyss 1989
And notable moments in miscellaneous movies.

Steamships, locomotives, electricity; these marvels of the industrial age sparked the imagination of futurists such as Jules Verne. Perhaps no other writer or work inspired so many to reach the stars as did this Frenchman’s famous tale of space travel. Later developments in microbiology, chemistry, and astronomy would inspire H.G. Wells and the notable science fiction authors of the early 20th century.

The submarine, the aircraft, the spaceship, time travel, nuclear weapons, and even stealth technology were all predicted in some form by science fiction writers many decades before they were realized. The writers were not simply conjuring such wonders from fanciful thought or children’s rhymes. As science advanced in the mid-19th and early 20th century, the probable future developments this new knowledge would bring about were in some cases quite obvious. Though powered flight seems a recent miracle, it was long expected: hydrogen balloons and parachutes had been around for over a century, and steam propulsion went through a long gestation before ships and trains were driven by the new engines. Solid rockets were ancient, and even multiple stages to increase altitude had been in use by fireworks makers for a very long time before the space age.

Some predictions came about in ways far removed from, yet still connected to, their fictional counterparts. The U.S. Navy’s steam-driven Nautilus swam the ocean blue under nuclear power not long before rockets took men to the moon. While Verne predicted an electric submarine, his notional Florida space gun never did take three men into space; however, there was a Canadian weapons designer named Gerald Bull who met his end while trying to build such a gun for Saddam Hussein. The insane Invisible Man of Wells took the form of invisible aircraft playing a less-than-human role in the insane game of mutually assured destruction. And a true time machine was found easily enough in the mathematics of Einstein: simply going fast enough through space will take a human being millions of years into the future. However, traveling back in time is still as much an impossibility as the anti-gravity Cavorite of The First Men in the Moon. Wells missed on occasion but was not far off with his story of alien invaders defeated by germs; except that we are the aliens, invading the natural world’s ecosystem with our genetically modified creations, and could very well soon meet our end as a result.

While Verne’s Captain Nemo made war on the death merchants of his world with a submarine ram, our own more modern anti-war device was found in the hydrogen bomb: an agent so destructive that no new world war has been possible since nuclear weapons were stockpiled in the second half of the last century. Neither Verne nor Wells imagined the destructive power of a single missile submarine able to incinerate all the major cities of Earth. The dozens of such superdreadnoughts even now cruising in the icy darkness of the deep ocean prove that truth is often stranger than fiction. It may seem the golden age of predictive fiction has passed, as exceptions to the laws of physics prove impossible despite advertisements to the contrary. Science fiction has given way to science fantasy, and the suspension of disbelief possible in the last century has turned to disappointment and the distraction of whimsical technological fairy tales. “Beam me up” was simply a way to cut production costs for special effects, and warp drive the only trick that would make a one-hour episode work. Unobtainium and wishalloy, handwavium and technobabble: it has watered down what our future could be into childish wish fulfillment and escapism.

The triumvirate of the original visionary authors of the last two centuries is completed by E.E. “Doc” Smith. With this less famous author the line between predictive fiction and science fantasy was first truly crossed, and the new genre of “space opera” most fully realized. The film industry has taken space opera and run with it in the Star Wars franchise and the works of Canadian filmmaker James Cameron. Though of course quite entertaining, these movies showcase all that is magical and fantastical, and wrong, in science fiction as a predictor of the future. The collective imagination of the public has now been conditioned to violate the reality of what is possible through the violent maiming of basic scientific tenets. This artistic license was something Verne at least tried not to resort to, Wells trespassed upon more frequently, and Smith indulged in without reservation. Just as Madonna found the secret to millions by shocking a jaded audience into pouring money into her bloomers, the formula for ripping off the future has been discovered in the lowest kind of sensationalism. One need only attend a viewing of the latest Transformers movie or download Battlestar Galactica to appreciate that the entertainment industry has cashed in on the ignorance of a poorly educated society by selling intellect-decaying brain candy. It is cowboys vs. aliens and has nothing of value to contribute to our culture… well, on second thought, I did get watery-eyed when the young man died in Harrison Ford’s arms. I am in no way criticizing the profession of acting, and I value the talent of these artists; it is rather the greed that corrupts the ancient art of storytelling that I am unhappy with. Directors are not directors unless they make money, and I feel sorry that these incredibly creative people find themselves less than free to pursue their craft.

The archetype of the modern science fiction movie was 2001, and like many legendary screen epics, A Space Odyssey was not as original as the marketing made it out to be; in an act of cinematic cold war, many elements were lifted from a Soviet movie. Even though the fantasy element was restricted to a single device in the form of an alien monolith, every artifice of this film has so far proven non-predictive. Interestingly, the propulsion system of the spaceship in 2001 was originally going to use atomic bombs, which are still, a half century later, the only practical means of interplanetary travel. Stanley Kubrick, fresh from Dr. Strangelove, was tired of nukes and passed on portraying this obvious future.

As with the submarine, the airplane, and nuclear energy, the technology to come may be predicted with some accuracy if the laws of physics are not insulted but rather just rudely addressed. Though in some cases the line is crossed, and what is rude turns disgusting. A recent proposal for a “NautilusX” spacecraft is one example of a completely vulgar denial of reality. Chemically propelled, with little radiation shielding, and exhibiting a ridiculous doughnut centrifuge, such advertising vehicles are far more dishonest than cinematic fabrications in that they deceive the public without the excuse of entertaining them. In the same vein, space tourism is presented as space exploration when in fact the obscene spending habits of the ultra-wealthy have nothing to do with exploration and everything to do with the attendant taxpayer-subsidized business plan. There is nothing to explore in low Earth orbit except the joys of zero-G bordellos. Rude undressing by way of the profit motive is followed by a rude address to physics when the key private space scheme for “exploration” is exposed. This supposed key is a false promise of things to come.

While very large and very expensive heavy-lift rockets have proven successful in escaping Earth’s gravitational field with human passengers, the inferior lift vehicles being marketed as “cheap access to space” are in truth cheap and nasty taxis to space stations going in endless circles. The flim-flam investors are basing their hopes of big profit on cryogenic fuel depots and transfer in space. Like the filling station every red-blooded American stops at to fill his personal spaceship with fossil fuel, depots are the solution to all the holes in the private space plan for “commercial space.” Unfortunately, storing and transferring hydrogen as a liquefied gas a few degrees above absolute zero in a zero-G environment has nothing in common with filling a car with gasoline. It will never work as advertised. It is a trick; a way to get those bordellos in orbit courtesy of taxpayer dollars. What a deal.

So what is the obvious future that our present level of knowledge presents to us when entertaining the possible and the impossible? More to come.

Transhumanists are into improvements, and many talk about specific problems, for instance Nick Bostrom. However, Bostrom’s problem statements have been criticized for not necessarily being problems, and I think this is largely why one must consider the problem definition (see step 2 below).

Sometimes people talk about their “solutions” for problems, for instance this one in H+ Magazine. But in many cases they are actually talking about their ideas of how to solve a problem, or making science-fictional predictions. So if you surf the web, you will find a lot of good ideas about possibly important problems—but a lot of what you find will be undefined (or not very well defined) problem ideas and solutions.

These proposed solutions often do not attempt to find root causes, or they assume the wrong root cause. And a realistic, complete plan for solving a problem is rare.

8D (Eight Disciplines) is a process used in various industries for problem solving and process improvement. The 8D steps described below could be very useful for transhumanists, not just for talking about problems but for actually implementing solutions in real life.

Transhuman concerns are complex not just technologically but also socioculturally. Some problems are more than just “a” problem; they are a dynamic system of problems, and the process of problem solving by itself is not enough. There has to be management, goals, etc., most of which is outside the scope of this article. But first one should know how to deal with a single problem before scaling up, and 8D is a process that can be used on a huge variety of complex problems.

Here are the eight steps of 8D:

  1. Assemble the team
  2. Define the problem
  3. Contain the problem
  4. Root cause analysis
  5. Choose the permanent solution
  6. Implement the solution and verify it
  7. Prevent recurrence
  8. Congratulate the team
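
Before the detailed descriptions, here is a minimal sketch of how the eight steps might be tracked in code, using the leaky-roof example that appears in step 3. This is purely illustrative: the class and function names are invented for this article and are not part of any standard 8D tooling.

```python
# A minimal, illustrative 8D tracker (hypothetical names throughout).
from dataclasses import dataclass, field

EIGHT_D_STEPS = [
    "Assemble the team",
    "Define the problem",
    "Contain the problem",
    "Root cause analysis",
    "Choose the permanent solution",
    "Implement the solution and verify it",
    "Prevent recurrence",
    "Congratulate the team",
]

@dataclass
class EightDReport:
    title: str
    notes: dict = field(default_factory=dict)    # step name -> findings
    completed: set = field(default_factory=set)

    def complete(self, step: str, finding: str) -> None:
        """Record a finding and mark the step done, enforcing step order."""
        index = EIGHT_D_STEPS.index(step)        # ValueError on unknown step
        if any(prev not in self.completed for prev in EIGHT_D_STEPS[:index]):
            raise RuntimeError(f"Earlier steps are incomplete before: {step}")
        self.notes[step] = finding
        self.completed.add(step)

report = EightDReport("Leaky roof")
report.complete("Assemble the team", "Homeowner, roofer, insurer")
report.complete("Define the problem", "Water entering the attic during rain")
```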

More detailed descriptions:

1. Assemble the Team

Are we prepared for this?

With an initial, rough concept of the problem, a team should be assembled to continue the 8D steps. The team will make an initial problem statement without presupposing a solution. They should attempt to define the “gap” (or error)—the big difference between the current problematic situation and the potential fixed situation. The team members should all be interested in closing this gap.

The team must have a leader; this leader makes agendas, synchronizes actions and communications, resolves conflicts, etc. In a company, the team should also have a “sponsor”, who is like a coach from upper management. The rest of the team is assembled as appropriate; this will vary depending on the problem, but some general rules for a candidate can be:

  • Has a unique point of view.
  • Logistically able to coordinate with the rest of the team.
  • Is not committed to preconceived notions of “the answer.”
  • Can actually accomplish change that they might be responsible for.

The size of an 8D team (at least in companies) is typically 5 to 7 people.

The team should be justified. This matters most within an organization that is paying for the team; however, even a group of transhumanists out in the wilds of cyberspace will have to defend themselves when people ask, “Why should we care?”

2. Define the Problem

What is the problem here?

Let’s say somebody throws my robot out of an airplane, and it immediately falls to the ground and breaks into several pieces. This customer then informs me that this robot has a major problem when flying after being dropped from a plane and that I should improve the flying software to fix it.

Here is the mistake: The problem has not been properly defined. The robot is a ground robot and was not intended to fly or be dropped out of a plane. The real problem is that a customer has been misinformed as to the purpose and use of the product.

When thinking about how to improve humanity, or even how to merely improve a gadget, you should consider: Have you made an assumption about the issue that might be obscuring the true problem? Did the problem emerge from a process that was working fine before? What processes will be impacted? If this is an improvement, can it be measured, and what is the expected goal?

The team should attempt to grok the issues and their magnitude. Ideally, they will be informed with data, not just opinions.

Just as with medical diagnosis, the symptoms alone are probably not enough input. There are various ways to collect more data, and which methods you use depends on the nature of the problem. For example, one method is the 5 W’s and 2 H’s (a structured sketch in code follows the list):

  • Who is affected?
  • What is happening?
  • When does it occur?
  • Where does it happen?
  • Why is it happening (initial understanding)?
  • How is it happening?
  • How many are affected?
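
As promised, here is one way to capture the 5 W’s and 2 H’s as a structured problem statement. The field and class names are invented for illustration; the robot example from above fills it in.

```python
# A hypothetical problem-definition record based on the 5 W's and 2 H's.
from dataclasses import dataclass

@dataclass
class ProblemDefinition:
    who_is_affected: str
    what_is_happening: str
    when_does_it_occur: str
    where_does_it_happen: str
    why_initial_understanding: str
    how_is_it_happening: str
    how_many_affected: int

    def statement(self) -> str:
        """Compose a one-line problem statement from the answers."""
        return (f"{self.what_is_happening}, affecting {self.who_is_affected} "
                f"({self.how_many_affected} case(s)), occurring "
                f"{self.when_does_it_occur} in {self.where_does_it_happen}.")

robot_issue = ProblemDefinition(
    who_is_affected="one customer",
    what_is_happening="Robot destroyed after an unsupported airplane drop",
    when_does_it_occur="on first drop",
    where_does_it_happen="field use",
    why_initial_understanding="customer misinformed about intended use",
    how_is_it_happening="ground robot used as an air-drop robot",
    how_many_affected=1,
)
print(robot_issue.statement())
```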

For humanity-affecting problems, I think it’s very important to define what the context of the problem is.

3. Contain the Problem

Containment

Some problems are urgent, and a stopgap must be put in place while the problem is being analyzed. This is particularly relevant for problems such as product defects which affect customers.

Some brainstorming questions are:

  • Can anything be done to mitigate the negative impact (if any) that is happening?
  • Who would have to be involved with that mitigation?
  • How will the team know that the containment action worked?

Before deploying an interim expedient, the team should have asked and answered these questions (they essentially define the containment action):

  • Who will do it?
  • What is the task?
  • When will it be accomplished?

A canonical example: You have a leaky roof (the problem). The containment action is to put a pail underneath the hole to capture the leaking water. This is a temporary fix until the roof is properly repaired, and mitigates damage to the floor.
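
For illustration, the leaky-roof containment can be recorded against the who/what/when questions above. The structure and the deadline are invented for this sketch, not a standard 8D artifact.

```python
# A hypothetical containment-action record for the leaky-roof example.
from datetime import date

containment_action = {
    "who": "building maintenance",
    "what": "place a pail under the leak and empty it hourly",
    "when": date(2012, 4, 9),    # illustrative deadline
    "worked_if": "the floor stays dry during the next rainfall",
}
```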

Don’t let the bucket of water example fool you—containment can be massive, e.g. corporate bailouts. Of course, the team must choose carefully: Is the cost of containment worth it?

4. Root Cause Analysis

There can be many layers of causation

Whenever you think you have an answer to a problem, ask yourself: Have you gone deep enough? Or is there another layer below? If you implement a fix, will the problem grow back?

Generally, events in the real world are causal. The point of root cause analysis is to trace the causes of your problem all the way back. If you don’t find the origin of the causes, the problem will probably rear its ugly head again.

Root cause analysis is one of the most overlooked, yet most important, steps of problem solving. Even engineers often lose their way when solving a problem and jump right into a fix that later turns out to be a red herring.

Typically, driving to root cause follows one of these two routes:

  1. Start with data; develop theories from that data.
  2. Start with a theory; search for data to support or refute it.

Either way, team members must always keep in mind that correlation is not necessarily causation.

One tool to use is the 5 Whys, in which you move down the “ladder of abstraction” by continually asking “why?” Start with a cause and ask why this cause is responsible for the gap (or error). Then ask again, until you’ve bottomed out at something that may be a true root cause.
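
To make the ladder concrete, here is the leaky-roof problem walked down five whys. The chain itself is invented for illustration.

```python
# An illustrative 5 Whys chain for the leaky-roof problem.
five_whys = [
    ("Why is the floor wet?",         "The roof is leaking."),
    ("Why is the roof leaking?",      "Several shingles are cracked."),
    ("Why are the shingles cracked?", "They are past their rated lifetime."),
    ("Why are they past lifetime?",   "There is no inspection schedule."),
    ("Why is there no schedule?",     "Roof maintenance was never assigned an owner."),
]

for question, answer in five_whys:
    print(f"{question} -> {answer}")

root_cause = five_whys[-1][1]   # candidate root cause: missing ownership
```

Note how the candidate root cause (no assigned owner) sits several layers below the symptom (a wet floor); fixing only the shingles would let the problem grow back.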

There are many other general purpose methods and tools to assist in this stage; I will list some of them here, but please look them up for detailed explanations:

  • Brainstorming: Generate as many ideas as possible, and elaborate on the best ideas.
  • Process flow analysis: Flowchart a process; attempt to narrow down what element in the flow chart is causing the problem.
  • Ishikawa: Use an Ishikawa (a.k.a. fishbone, or cause-and-effect) diagram to try narrowing down the cause(s).
  • Pareto analysis: Generate a Pareto chart, which may indicate which cause (of many) should be fixed first (a minimal sketch follows below).
  • Data analysis: Use trend charts, scatter plots, etc. to assist in finding correlations and trends.

And that is just the beginning—a problem may need a specific new experiment or data collection method devised.
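
As an example of one tool from the list above, here is a minimal Pareto analysis: sort causes by frequency and report each cause’s cumulative share, so the team can see which few causes account for most occurrences. The defect data is invented.

```python
# A minimal Pareto analysis over hypothetical defect counts.
defect_counts = {
    "misaligned sensor": 48,
    "loose connector": 23,
    "firmware timeout": 12,
    "cracked housing": 9,
    "other": 8,
}

total = sum(defect_counts.values())
cumulative = 0
for cause, count in sorted(defect_counts.items(), key=lambda kv: -kv[1]):
    cumulative += count
    print(f"{cause:20s} {count:3d}  cumulative {100 * cumulative / total:5.1f}%")
```

Here the top two causes account for over 70% of the defects, which is exactly the kind of signal that tells a team where to fix first.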

Ideally you would have a single root cause, but that is not always the case.

The team should also come up with various corrective actions that address the root cause, to be selected and refined in the next step.

5. Choose the Permanent Solution

The solution must be one or more corrective actions that solve the cause(s) of the problem. Corrective action selection is additionally guided by criteria such as time constraints, money constraints, efficiency, etc.

This is a great time to simulate/test the solution, if possible. There might be unaccounted-for side effects, either in the system you fixed or in related systems. This is especially true for some of the major issues that transhumanists wish to tackle.

You must verify that the corrective action(s) will in fact fix the root cause and not cause bad side effects.

6. Implement the Solution and Verify It

This is the stage when the team actually sets the corrective action(s) into motion. But doing it isn’t enough; the team also has to check whether the solution is really working.

For some issues the verification is clear-cut. Other corrective actions have to be evaluated for effectiveness, for instance against a benchmark. Depending on the time scale of the corrective action, the team might need to add various monitors and/or controls to continually make sure the root cause stays squashed.
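
As a sketch of what a clear-cut verification might look like in code: suppose the team agreed that the fix must push a measured defect rate below a target threshold. All numbers and names here are invented.

```python
# An illustrative verification check against an agreed acceptance criterion.
BASELINE_DEFECT_RATE = 0.062   # measured before the corrective action
TARGET_DEFECT_RATE = 0.010     # acceptance criterion agreed by the team

def fix_is_verified(observed_rate: float) -> bool:
    """True only if the corrective action both improves on the baseline
    and meets the agreed target."""
    return observed_rate < BASELINE_DEFECT_RATE and observed_rate <= TARGET_DEFECT_RATE

print(fix_is_verified(0.008))  # True: verified
print(fix_is_verified(0.045))  # False: better than baseline, but misses target
```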

7. Prevent Recurrence

It’s possible that a process will revert to its old ways after the problem has been solved, resulting in the same type of problem happening again. So the team should provide the organization or environment with improvements to processes, procedures, practices, etc., so that this type of problem does not resurface.

8. Congratulate the Team

Party time! The team should share and publicize the knowledge gained from the process as it will help future efforts and teams.

Image credits:
1. Inception (2010), Warner Bros.
2. Peter Galvin
3. Tom Parnell
4. shalawesome