
- CERN’s annual meeting in Chamonix to fix the LHC schedule: increasing energies. No external, multi-disciplinary risk assessment so far. Future plans target a costly LHC upgrade in 2013 and a Mega-LHC in 2022.

- COMMUNICATION to CERN – For a neutral and multi-disciplinary risk assessment before any LHC upgrade

According to CERN’s Chamonix workshop (Feb. 6–10, 2012) and a press release issued today: in 2012 the collision energies of the world’s biggest particle collider, the LHC, are to be increased from 3.5 to 4 TeV per beam, and the luminosity is planned to be raised by a factor of three. This means many more particle collisions at higher energies.

CERN plans to shut down the LHC in 2013 for about 20 months for a very costly upgrade (around CHF 1 billion?) in order to afterwards run the LHC at double the present energies (7 TeV per beam).

Future plans: A High-Luminosity LHC (HL-LHC) is planned, “tentatively scheduled to start operating around 2022” — with a beam energy increased from 7 to 16.5 TeV(!):
http://cdsweb.cern.ch/journal/CERNBulletin/2012/06/News%20Articles/1423292?ln=en

One might well ask where this will lead, sooner or later, if the risks are not properly investigated first. Many critics from different fields are severely alarmed.

For comparison: the AMS-02 experiment, which measures cosmic rays directly from the International Space Station, operates on a scale of around 1.5 TeV. Very high energy cosmic rays have only been measured indirectly (via their momentum); the type, velocity, mass and origin of these particles are unknown. In any case, the number of collisions under the extreme and unprecedented artificial conditions at the LHC is orders of magnitude higher than anywhere else in the nearer cosmos.

There were many talks on machine safety at the Chamonix meeting. The safety of humans and the environment was evidently not an official topic. That is why critics have turned to CERN in an open letter:

———————————————————–
Communication on LHC Safety directed to CERN

For a neutral and multidisciplinary risk assessment to be done before any LHC upgrade

—————————-
Communiqué to CERN
—————————-

Dear management and scientists at CERN,

Astronomer and Leonardo-publisher Roger Malina recently emphasized that the main problem in research is that “curiosity is not neutral”. And he concluded: “There are certain problems where we cannot cloister the scientific activity in the scientific world, and I think we really need to break the model. I wish CERN, when they had been discussing the risks, had done that in an open societal context, and not just within the CERN context.”

Video of Roger Malina’s presentation at Ars Electronica, following prominent philosopher and leading constructivist Humberto Maturana’s remarkable lecture on science and “certainty”: http://www.youtube.com/watch?v=DOZS2qJrVkU

In the eyes of many critics, a number of questions related to LHC safety have not been ruled out, and some critics hold concrete and severe concerns. The comparability underlying the cosmic-ray argument is also challenged.

Australian risk researcher and ethicist Mark Leggett concludes in a paper that CERN’s LSAG safety report meets less than a fifth of the criteria of a modern risk assessment:
http://lhc-concern.info/wp-content/uploads/2009/09/leggett_review_of_lsag_process_sept_1__09.pdf

Without getting into details of the LHC safety discussion – this article in the well-recognized Physics arXiv Blog (MIT’s Technology Review) states: “Black Holes, Safety, and the LHC Upgrade — If the LHC is to be upgraded, safety should be a central part of the plans.”

Like other pragmatic critics, the author argues in his closing remarks: “What’s needed, of course, is for the safety of the LHC to be investigated by an independent team of scientists with a strong background in risk analysis but with no professional or financial links to CERN.”
http://www.technologyreview.com/blog/arxiv/27319/

The renowned Institute for Technology Assessment and Systems Analysis (ITAS) in Karlsruhe and other risk researchers have already signalled interest in cooperating. We think that in such a process CERN and its critics should naturally both be constructively involved.

Please act in favour of such a neutral and multi-disciplinary assessment, perhaps directly following the present Chamonix meeting. Even if you feel sure that there is no reason for concern, this must be in your interest, since the matter is of both scientific and public concern.

In the name of many others:
[…]
————————–
LHC-Kritik / LHC-Critique
www.LHC-concern.info

Direct link to this Communication to CERN:
http://lhc-concern.info/?page_id=139
Also published in “oekonews”: http://www.oekonews.at/index.php?mdoc_id=1067776

CERN press release from Feb 13 2012:
http://press.web.cern.ch/press/PressReleases/Releases2012/PR01.12E.html

“Badly designed to understand the Universe — CERN’s LHC in critical Reflection by great Philosopher H. Maturana and Astrophysicist R. Malina”:
https://lifeboat.com/blog/2012/02/badly-designed-to-understand-the-universe-cerns-lhc-in-critical-reflection-by-great-philosopher-h-maturana-and-astrophysicist-r-malina

“LHC-Kritik/LHC-Critique – Network for Safety at experimental sub-nuclear Reactors” is a platform articulating the risks related to particle colliders and experimental high-energy physics. LHC-Critique has produced a number of detailed papers arguing, in readily understandable terms, that the present safety measures are insufficient, and it still has a lawsuit pending at the European Court of Human Rights.

More info at LHC-Kritik / LHC-Critique:
www.LHC-concern.info
[email protected]
+43 650 629 627 5

Famous Chilean philosopher Humberto Maturana describes “certainty” in science as a subjective emotional opinion, to the astonishment of the assembled physics luminaries. French astronomer and “Leonardo” publisher Roger Malina hopes that the LHC safety issue will be discussed in a broader social context and not only within the closer scientific framework of CERN.

(Article published in “oekonews”: http://oekonews.at/index.php?mdoc_id=1067777 )

The latest renowned “Ars Electronica Festival” in Linz (Austria) was dedicated in part to an uncritical celebration of the gigantic particle accelerator LHC (Large Hadron Collider) at the European Nuclear Research Center CERN, located on the Franco-Swiss border. CERN in turn promoted an art prize with the idea of “cooperating closely” with the arts. This time the objections were of a philosophical nature, and they carried weight.

In a thought-provoking presentation Maturana addressed the limits of our knowledge and the intersubjective foundations of what we call “objective” and “reality.” His talk was peppered with excellent remarks and witty asides that contributed much to the accessibility of these fundamental philosophical problems: “Be realistic, be objective!”, Maturana pointed out, simply means that we want others to adopt our point of view. The great constructivist and founder of the concept of autopoiesis clearly distinguished his approach from a solipsistic position.

Given Ars Electronica’s spotlight on CERN and its experimental sub-nuclear research reactor, Maturana’s explanations were especially pertinent; to the assembled CERN celebrities they may have come as a mixture of unpleasant surprise and something they could not quite relate to.

During the question-and-answer period, Markus Goritschnig asked Maturana whether it wasn’t problematic that CERN is basically controlling itself, discarding a number of existential risks discussed in relation to the LHC, including hypothetical but mathematically demonstrable risks raised (and later downplayed) by physicists like Nobel Prize winner Frank Wilczek, and whether he thought it necessary to integrate other sciences besides physics, such as risk research, into the LHC safety assessment process. Maturana replied (in the video from about 1:17): “We human beings can always reflect on what we are doing and choose. And choose to do it or not to do it. And so the question is, how are we scientists reflecting upon what we do? Are we taking seriously our responsibility of what we do? […] We are always in the danger of thinking that, ‘Oh, I have the truth’, I mean — in a culture of truth, in a culture of certainty — because truth and certainty are not as we think — I mean certainty is an emotion. ‘I am certain that something is the case’ means: ‘I do not know’. […] We cannot pretend to impose anything on others; we have to create domains of interrogativity.”

Disregarding these reflections, Sergio Bertolucci (CERN) considered the peer-review system within the physics community a sufficient scholarly control. He dismissed all the disputed risks with the “cosmic ray argument,” arguing that much more energetic collisions take place naturally in the atmosphere without any adverse effect. This safety argument can, however, be criticized from several perspectives: for example, very high energy cosmic-ray collisions can be measured only indirectly, and the collision frequency under the unprecedented, extreme artificial conditions at the LHC is orders of magnitude higher than in the Earth’s atmosphere or anywhere else in the nearer cosmos.

The second presentation of the “Origin” Symposium III was held by Roger Malina, an astrophysicist and the editor of “Leonardo” (MIT Press), a leading academic journal for the arts, sciences and technology.

Malina opened with a disturbing fact: “95% of the universe is of an unknown nature, dark matter and dark energy. We sort of know how it behaves. But we don’t have a clue of what it is. It does not emit light, it does not reflect light. As an astronomer this is a little bit humbling. We have been looking at the sky for millions of years trying to explain what is going on. And after all of that and all those instruments, we understand only 3% of it. A really humbling thought. […] We are the decoration in the universe. […] And so the conclusion that I’d like to draw is that: We are really badly designed to understand the universe.”

The main problem in research is: “curiosity is not neutral.” When astrophysics reaches its limits, cooperation between arts and science may indeed be fruitful for various reasons and could perhaps lead to better science in the end. In a later communication Roger Malina confirmed that the same can be demonstrated for the relation between natural sciences and humanities or social sciences.

However, the astronomer emphasized that an “art-science collaboration can lead to better science in some cases. It also leads to different science, because by embedding science in the larger society, I think the answer was wrong this morning about scientists peer-reviewing themselves. I think society needs to peer-review itself and to do that you need to embed science differently in society at large, and that means cultural embedding and appropriation. Helga Nowotny at the European Research Council calls this ‘socially robust science’. The fact that CERN did not lead to a black hole that ended the world was not due to peer-review by scientists. It was not due to that process.”

One of Malina’s main arguments focused on differences in “the ethics of curiosity”. The best ethics in (natural) science include notions like intellectual honesty, integrity, organized scepticism, disinterestedness, impersonality and universality. “Those are the belief systems of most scientists. And there is a fundamental flaw to that. And Humberto this morning really expanded on some of that. The problem is: Curiosity is embodied. You cannot make it into a neutral ideal of scientific curiosity. And here I got a quote of Humberto’s colleague Varela: ‘All knowledge is conditioned by the structure of the knower.’”

In conclusion, better cooperation among the various sciences and skills is urgently necessary, because: “Artists ask questions that scientists would not normally ask. Finally, why we want more art-science interaction is because we don’t have a choice. There are certain problems in our society today that are so tough we need to change our culture to resolve them. Climate change: we’ve got to couple the science and technology to the way we live. That’s a cultural problem, and we need artists working on that with the scientists every day of the next decade, the next century, if we survive it.”

Roger Malina then turned directly to the LHC safety discussion and openly contradicted the safety assurances given earlier: he would generally hope for a much more open process in the LHC safety debate, rather than having it discussed only within the narrow field of particle physics. Concretely: “There are certain problems where we cannot cloister the scientific activity in the scientific world, and I think we really need to break the model. I wish CERN, when they had been discussing the risks, had done that in an open societal context, and not just within the CERN context.”

Presently CERN is holding its annual meeting in Chamonix to fix the LHC’s 2012 schedule and to increase luminosity by a factor of four, in the hope of finally finding the Higgs boson, against a 100-dollar bet by Stephen Hawking, who is convinced that micro black holes will be observed instead, immediately decaying by hypothetical “Hawking radiation”, with the God Particle’s blessing. In that case, Hawking pointed out, it would be he himself gaining the Nobel Prize. Quite ironically, official T-shirts were sold at Ars Electronica showing the “typical signature” of a micro black hole decaying at the LHC, a totally hypothetical process resting on a bunch of unproven assumptions.

In 2013 CERN plans to repair construction flaws in the LHC, at a cost of up to CHF 1 billion, in order to run the “Big Bang machine” at double the present energies. A neutral and multi-disciplinary risk assessment is still lacking, while a number of scientists insist that their theories pointing to even global risks have not been invalidated. CERN’s latest safety assurance, comparing natural cosmic rays hitting the Earth with the LHC experiment, is valid only from rather narrow viewpoints. The relatively young analyses of high-energy cosmic rays are based on indirect measurements and calculations; the type, velocity, mass and origin of these particles are unknown. But even taking these relations for granted and calculating with the “reassuring” figures given by CERN’s PR, within ten years of operation the LHC, under extreme and unprecedented artificial circumstances, would produce as many high-energy particle collisions as occur in about 100,000 years in the entire atmosphere of the Earth. Just to illustrate the energetic potential of the gigantic facility: one LHC beam, thinner than a hair and consisting of billions of protons, carries the kinetic energy of an aircraft carrier moving at 12 knots.
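The aircraft-carrier comparison can be checked with a back-of-envelope calculation. The bunch counts and proton numbers below are nominal LHC design figures, and the ship mass is an assumed round number, so treat this as an illustrative sketch rather than an official CERN calculation:

```python
# Back-of-envelope check of the "aircraft carrier" energy comparison.
# All figures below are assumptions (nominal design values), not from the article.
bunches = 2808               # proton bunches per beam
protons_per_bunch = 1.15e11
energy_per_proton_ev = 7e12  # 7 TeV per proton at full design energy
ev_to_joule = 1.602e-19

beam_energy = bunches * protons_per_bunch * energy_per_proton_ev * ev_to_joule
print(f"Stored energy per beam: {beam_energy / 1e6:.0f} MJ")  # ~360 MJ

# Kinetic energy of a large ship at 12 knots (mass is an assumed round figure):
mass_kg = 1.0e8              # ~100,000 tonnes
speed_ms = 12 * 0.5144       # knots to m/s
ship_ke = 0.5 * mass_kg * speed_ms**2
print(f"Ship kinetic energy:    {ship_ke / 1e6:.0f} MJ")      # ~1900 MJ
```

For these assumed figures the two numbers agree only to within a factor of a few, which is typical of such popular comparisons.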

This article in the Physics arXiv Blog (MIT’s Technology Review) reads: “Black Holes, Safety, and the LHC Upgrade — If the LHC is to be upgraded, safety should be a central part of the plans.”, closing with the claim: “What’s needed, of course, is for the safety of the LHC to be investigated by an independent team of scientists with a strong background in risk analysis but with no professional or financial links to CERN.”
http://www.technologyreview.com/blog/arxiv/27319/

Australian ethicist and risk researcher Mark Leggett concluded in a paper that CERN’s LSAG safety report on the LHC meets less than a fifth of the criteria of a modern risk assessment. There but for the grace of a goddamn particle? Probably not. Before pushing the LHC to its limits, CERN must be challenged by a genuinely neutral, external and multi-disciplinary risk assessment.

Video recordings of the “Origin III” symposium at Ars Electronica:
Presentation Humberto Maturana:

Presentation Roger Malina:

“Origin” Symposia at Ars Electronica:
http://www.aec.at/origin/category/conferences/

Communication on LHC Safety directed to CERN
Feb 10 2012
For a neutral and multidisciplinary risk assessment to be done before any LHC upgrade
http://lhc-concern.info/?page_id=139

More info, links and transcripts of lectures at “LHC-Critique — Network for Safety at experimental sub-nuclear Reactors”:

www.LHC-concern.info

One way that astronomers and astrobiologists search for life in the galaxy is observation of rocky planets orbiting other stars. Such planets may contain an atmosphere, liquid water, and other ingredients that are required for biological life on Earth. Once a number of these potentially inhabited planets have been identified, the next logical step in exploration is to send remote exploratory probes to make direct observations of these planets. Present-day study of other planetary systems is so far limited to remote observation with telescopes, but future plans for exploration include the design and deployment of small robotic exploratory spacecraft toward other star systems.

If intelligent, technological extraterrestrial life exists in the galaxy, then it is conceivable that such a civilization might embark on a similar exploration strategy. Extraterrestrial intelligent (ETI) civilizations may choose to pursue astronomy and search for planets orbiting other star systems and may also choose to follow-up on some of these targets by deploying their own remote exploratory spacecraft. If nearby ETI have observed the Solar System and decided to pursue further exploration, then evidence of ETI technology may be present in the form of such exploratory probes. We refer to this ETI technology as “non-terrestrial artifacts”, in part to distinguish these plausible exploratory spacecraft from the flying saucers of science fiction.

In a recent paper titled “On the likelihood of non-terrestrial artifacts in the Solar System”, published in the journal Acta Astronautica (and available on arXiv.org as a preprint), Jacob Haqq-Misra and Ravi Kopparapu discuss the likelihood that human exploration of the Solar System would have uncovered any non-terrestrial artifacts. Exploratory probes destined for another star system are likely to be relatively small (less than ten meters in diameter), so any non-terrestrial artifacts present in the Solar System have probably remained undetected. The surface and atmosphere of Earth are probably the most comprehensively searched volumes in the Solar System and can probably be considered free of non-terrestrial artifacts. Likewise, the surface of the Moon and portions of Mars have been searched at a resolution sufficient to have uncovered any non-terrestrial artifacts that could have been present. However, the deep oceans of Earth and the subsurface of the Moon are largely unexplored territory, while regions such as the asteroid belt, the Kuiper belt, and stable orbits around other Solar System planets could also contain non-terrestrial artifacts that have so far escaped human observation. Because of this plenitude of nearby unexplored territory, it would be premature to conclude that the Solar System is devoid of non-terrestrial artifacts.

Although the chances of finding non-terrestrial artifacts might be low, the discovery of ETI technology, even if broken and non-functioning, would provide evidence that ETI exist elsewhere in the galaxy and would have a profound impact on humankind. This is not to suggest that the search for non-terrestrial technology should be given priority over other astronomical missions; however, as human exploration of the Solar System continues, we may as well keep our eyes open for ETI technology, just in case.

Twenty years ago, way back in the primordial soup of the early Network, in an out-of-the-way electromagnetic watering hole called USENET, this correspondent entered the previous millennium’s virtual nexus of survival-of-the-weirdest via an accelerated learning process calculated to evolve a cybernetic avatar from the Corpus Digitalis. Now, as a columnist, sci-fi writer and independent filmmaker [Cognition Factor, 2009, with Terence McKenna], I have filmed rocket launches and solar eclipses for the South African Astronomical Observatory and produced educational programs for the Southern African Large Telescope (SALT). My latest efforts include videography for the International Astronautical Congress in Cape Town in October 2011 and a completed, soon-to-be-released autobiography, draft-titled “Journey to Everywhere”.

Cognition Factor attempts to be the world’s first ‘smart movie’, digitally orchestrated to fuse the left and right cerebral hemispheres and decode civilization into an articulate verbal and visual language, structured from sequential logical hypotheses based on the following ‘Big Five’ questions:

1.) Evolution Or Extinction?
2.) What Is Consciousness?
3.) Is God A Myth?
4.) Fusion Of Science & Spirit?
5.) What Happens When You Die?

Even if you believe that imagination is more important than knowledge, you’ll need a full deck to solve the ‘Arab Spring’ epidemic, which may be a logical step in the ‘Global Equalisation Process’ as more and more of our planet’s alumni fling their hats in the air and emit primal screams approximating
“we don’t need to accumulate (so much) wealth anymore”, in a language comprising ‘post-Einsteinian’ mathematics…

Good luck to you if you do…

Schwann Cybershaman

I am taking the advice of a reader of this blog and devoting part 2 to examples of old school and modern movies and the visionary science they portray.

Things to Come 1936 — Event Horizon 1997
Things to Come was a disappointment to Wells, and Event Horizon was no less a disappointment to audiences. I found them both very interesting as showcases for some technology and social challenges… to come, but a little off the mark as regards the exact technology and explicit social issues. In the final scene of Things to Come, Raymond Massey asks if mankind will choose the stars. What will we choose? I find this moment very powerful; it is perhaps the most eloquent expression of the whole genre of science fiction. Event Horizon was a complete counterpoint: a horror movie set in space, with a starship modeled after a gothic cathedral. Event Horizon had a rescue crew put in stasis for a high-G, several-month journey to Neptune on a fusion-powered spaceship. High acceleration and fusion bring H-bombs to mind, and though not portrayed, this propulsion system is in fact a most probable future. Fusion “engines” are old hat in sci-fi, despite the near certainty that the only places fusion will ever work as advertised are in a bomb or a star. The Event Horizon, haunted and consigned to hell, used a “gravity drive” to achieve star travel by “folding space.” Interestingly, a recent concept for a black-hole-powered starship is probably the most accurate forecast of the technology that will be used for interstellar travel in the next century. While ripping a hole in the fabric of space-time may be strictly science fantasy for the next thousand years at least, small-singularity propulsion using Hawking radiation to achieve a high fraction of the speed of light is mathematically sound and the most obvious future.

https://lifeboat.com/blog/2012/09/only-one-star-drive-can-work-so-far

That is, if humanity avoids an outbreak of engineered pathogens or any one of several other threats to our existence in that time frame.
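The “mathematically sound” part rests on the standard semiclassical Hawking-radiation formulas. The sketch below evaluates them for an assumed hole mass of a billion kilograms, a figure in the ballpark of published black-hole-starship concepts rather than anything from this article:

```python
import math

# Standard Hawking evaporation formulas for a Schwarzschild black hole
# (semiclassical, massless-radiation approximation).
hbar = 1.055e-34   # J*s
c = 2.998e8        # m/s
G = 6.674e-11      # m^3 kg^-1 s^-2

def hawking_power(mass_kg):
    """Radiated power P = hbar*c^6 / (15360*pi*G^2*M^2)."""
    return hbar * c**6 / (15360 * math.pi * G**2 * mass_kg**2)

def evaporation_time(mass_kg):
    """Evaporation lifetime t = 5120*pi*G^2*M^3 / (hbar*c^4)."""
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

M = 1e9  # kg, an assumed "small singularity" mass
print(f"Power:    {hawking_power(M):.2e} W")    # a few hundred terawatts
print(f"Lifetime: {evaporation_time(M):.2e} s") # ~8e10 s, roughly 2,700 years
```

The power output of a hole this size is enormous while its lifetime spans centuries, which is exactly the regime the black-hole-starship proposals exploit; the drive problem is capturing and directing that radiation, not generating it.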

Hand in hand with any practical method of journeying to other star systems is the concept of the “sleeper ship.” As inevitable as the submarine or powered flight was in the past, the idea of putting human beings in cold storage would bring tremendous changes to society. Suspended animation using a cryopreservation procedure is by far the most radical and important global event possible, and perhaps probable, in the near future. The ramifications of a revivable whole-body cryopreservation procedure are truly incredible: cryopreservation would be the most important event in the history of mankind, and future generations would surely mark it as the beginning of “modern” civilization. Though taken no more seriously than the possibility of personal computers once was, the advances in medical technology make any movie depicting suspended animation quite prophetic.

The Thing 1951/Them 1954 — Deep Impact 1998/Armageddon 1998
These four movies were essentially about the same… thing. Whether a space vampire in the Arctic, mutated super-organisms underneath the earth, or a big whatever in outer space on a collision course with our planet, the subject was a monstrous threat, the end of humankind on Earth being the common theme. The Lifeboat blog is about such threats, and The Thing and Them would also appeal to any fan of Barbara Ehrenreich’s book Blood Rites. It is interesting that while we appreciate in a personal way what it means to face monsters or the supernatural, we just do not “get” the much greater threats only recently revealed by impact craters like Chicxulub. In this way these movies, dealing with instinctively and non-instinctively realized threats, have an important relationship to each other. And this connection extends to the more modern sci-fi creature features of past decades. Just how much The Thing and Them contributed to the greatest military sci-fi movie of the 20th century (Aliens, of course) will probably never be known. Director James Cameron once paid several million dollars out of court to sci-fi writer Harlan Ellison after admitting during an interview to using Ellison’s work, so he will not be making that mistake again. The second and third place honors go to Starship Troopers, an effort of Dutch filmmaker Paul Verhoeven, and Predator, directed by John McTiernan.

While The Thing and Them still play well, and Deep Impact, directed by Mimi Leder, is a good flick with uncanny predictive elements such as a black president and a tidal wave, Armageddon is worthless. I mention this piece of trash cinema only because it is necessary for comparison, and to applaud the three minutes in which the cryogenic fuel transfer procedure is shown to be the farce that it actually is. Only one of the worst movie directors ever, or the space tourism industry, would parade such a bad idea before the public.
Ice Station Zebra 1968 — The Road 2009
Ice Station Zebra was supposedly based on a true incident. This Cold War thriller featured Rock Hudson as the quintessential submarine commander and was a favorite of Howard Hughes. By then a recluse, Hughes purchased a Las Vegas TV station so he could watch the movie over and over. For those who have not seen it, I will not spoil the sabotage sequence, which has never been equaled. I pair Ice Station Zebra and The Road because they make a fine sextet with The Thing/Them and Deep Impact/Armageddon.

The settings for many of the scenes in these movies are wastelands of ice, desert, cometoid, or dead forest. While Armageddon is one of the worst movies ever made on a big budget, The Road must be one of the best made on a small one, if accuracy is a measure of best. The Road was a problem for the studio that produced it, and its release was delayed due to the reaction of test audiences: all viewers left the theatre profoundly depressed. It is a shockingly realistic movie, and it disturbed me to the point where I started writing about impact deflection. The connection between Armageddon and The Road, two movies so different, is the threat and aftermath of an asteroid or comet impact. While The Road never specifies an impact as the disaster that ravaged the planet, it fits the story perfectly. Armageddon has a few accurate statements about impacts mixed in with ludicrous plot devices that make the story a bad experience for anyone concerned with planetary protection. It seems almost blasphemous, and positively criminal, to make such a juvenile for-profit enterprise out of an inevitable event that is as serious as serious gets. Do not watch it. Ice Station Zebra, on the other hand, is a must-see and is in essence a showcase of the only tools available to prevent The Road from becoming reality. Nuclear weapons and spacecraft, the very technologies that so many feared would destroy mankind, are the only hope of saving the human race in the event of an impending impact.

Part 3:
Gog 1954 — Stealth 2005
Fantastic Voyage 1966 — The Abyss 1989
And notable moments in miscellaneous movies.

Steamships, locomotives, electricity; these marvels of the industrial age sparked the imagination of futurists such as Jules Verne. Perhaps no other writer or work inspired so many to reach the stars as did this Frenchman’s famous tale of space travel. Later developments in microbiology, chemistry, and astronomy would inspire H.G. Wells and the notable science fiction authors of the early 20th century.

The submarine, aircraft, the spaceship, time travel, nuclear weapons, and even stealth technology were all predicted in some form by science fiction writers many decades before they were realized. The writers were not simply conjuring such wonders from fanciful thought or children’s rhymes. As science advanced in the mid-19th and early 20th centuries, the probable future developments this new knowledge would bring about were in some cases quite obvious. Though powered flight seems a recent miracle, it was long expected: hydrogen balloons and parachutes had been around for over a century, and steam propulsion went through a long gestation before ships and trains were driven by the new engines. Solid rockets were ancient, and even multiple stages to increase altitude had been in use by fireworks makers for a very long time before the space age.

Some predictions came about in ways far removed from, yet still connected to, their fictional counterparts. The U.S. Navy’s steam-driven Nautilus swam the ocean blue under nuclear power not long before rockets took men to the Moon. While Verne predicted an electric submarine, his notional Florida space gun never did take three men into space; there was, however, a Canadian weapons designer named Gerald Bull who met his end while trying to build such a gun for Saddam Hussein. The insane Invisible Man of Wells took the form of invisible aircraft playing a less than human role in the insane game of mutually assured destruction. And a true time machine was found easily enough in the mathematics of Einstein. Simply going fast enough through space will take a human being millions of years into the future. However, traveling back in time remains as much an impossibility as the anti-gravity Cavorite from The First Men in the Moon. Wells missed on occasion but was not far off with his story of alien invaders defeated by germs, except that we are the aliens, invading the natural world’s ecosystem with our genetically modified creations, and could very well soon meet our end as a result.
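The one-way time machine in Einstein’s mathematics is just the Lorentz factor. The sketch below, with assumed round numbers for the scenario, shows how close to light speed a traveller would have to go for a million Earth years to pass during a twenty-year voyage:

```python
import math

def lorentz_gamma(beta):
    """Time-dilation factor for speed beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta**2)

def beta_for_gamma(gamma):
    """Speed (as a fraction of c) needed for a given dilation factor."""
    return math.sqrt(1.0 - 1.0 / gamma**2)

# Assumed scenario: 1,000,000 Earth years pass while the traveller ages 20 years.
gamma = 1_000_000 / 20       # required dilation factor: 50,000
beta = beta_for_gamma(gamma)
print(f"Required speed: {beta:.12f} c")  # within a few parts in 1e10 of c
```

Note this ignores the acceleration phases and the staggering energy cost; it only illustrates that the forward trip is permitted by the same mathematics that forbids the return.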

While Verne’s Captain Nemo made war on the death merchants of his world with a submarine ram, our own more modern anti-war device was found in the hydrogen bomb: an agent so destructive that no new world war has been possible since nuclear weapons were stockpiled in the second half of the last century. Neither Verne nor Wells imagined the destructive power of a single missile submarine able to incinerate all the major cities of Earth. The dozens of such superdreadnoughts even now cruising in the icy darkness of the deep ocean prove that truth is often stranger than fiction. It may seem the golden age of predictive fiction has passed, as exceptions to the laws of physics prove impossible despite advertisements to the contrary. Science fiction has given way to science fantasy, and the suspension of disbelief possible in the last century has turned to disappointment and the distractions of whimsical technological fairy tales. “Beam me up” was simply a way to cut production costs for special effects, and warp drive the only trick that would make a one-hour episode work. Unobtainium and wishalloy, handwavium and technobabble: they have watered down what our future could be into childish wish fulfillment and escapism.

The triumvirate of the original visionary authors of the last two centuries is completed with E. E. “Doc” Smith. With this less famous author the line between predictive fiction and science fantasy was first truly crossed and the new genre of “Space Opera” most fully realized. The film industry has taken Space Opera and run with it in the Star Wars franchise and the works of Canadian filmmaker James Cameron. Though of course quite entertaining, these movies showcase all that is magical, fantastical, and wrong concerning science fiction as a predictor of the future. The collective imagination of the public has now been conditioned to violate the reality of what is possible through the violent maiming of basic scientific tenets. This artistic license was something Verne at least tried not to resort to, Wells trespassed upon more frequently, and Smith indulged in without reservation. Just as Madonna found the secret to millions by shocking a jaded audience into pouring money into her bloomers, the formula for ripping off the future has been discovered in the lowest kind of sensationalism. One need only attend a viewing of the latest Transformers movie or download Battlestar Galactica to appreciate that the entertainment industry has cashed in on the ignorance of a poorly educated society by selling intellect-decaying brain candy. It is cowboys vs. aliens and has nothing of value to contribute to our culture… well, on second thought, I did get watery-eyed when the young man died in Harrison Ford’s arms. I am in no way criticizing the profession of acting, and I value the talent of these artists; it is rather the greed that corrupts the ancient art of storytelling that I am unhappy with. Directors are not directors unless they make money, and I feel sorry that these incredibly creative people find themselves less than free to pursue their craft.

The archetype of the modern science fiction movie was 2001: A Space Odyssey, and like many legendary screen epics, it was not as original as the marketing made it out to be. In an act of cinema cold war, many elements were lifted from a Soviet movie. Even though the fantasy element was restricted to a single device in the form of an alien monolith, every artifice of this film has so far proven non-predictive. Interestingly, the propulsion system of the spaceship in 2001 was originally going to use atomic bombs, which are still, a half century later, the only practical means of interplanetary travel. Stanley Kubrick, fresh from Dr. Strangelove, was tired of nukes and passed on portraying this obvious future.

As with the submarine, airplane, and nuclear energy, the technology to come may be predicted with some accuracy if the laws of physics are not insulted but rather just rudely addressed. Though in some cases the line is crossed and what is rude turns disgusting. A recent proposal for a “Nautilus-X” spacecraft is one example of a completely vulgar denial of reality. Chemically propelled, with little radiation shielding, and exhibiting a ridiculous doughnut centrifuge, such advertising vehicles are far more dishonest than cinematic fabrications in that they deceive the public without the excuse of entertaining them. In the same vein, space tourism is presented as space exploration when in fact the obscene spending habits of the ultra-wealthy have nothing to do with exploration and everything to do with the attendant taxpayer-subsidized business plan. There is nothing to explore in Low Earth Orbit except the joys of zero-G bordellos. Rudely undressing by way of the profit motive is followed by a rude address to physics when the key private space scheme for “exploration” is exposed. This supposed key is a false promise of things to come.

While very large and very expensive Heavy Lift Rockets have been proven successful in escaping earth’s gravitational field with human passengers, the inferior lift vehicles being marketed as “cheap access to space” are in truth cheap and nasty taxis to space stations going in endless circles. The flim-flam investors are basing their hopes of big profit on cryogenic fuel depots and transfer in space. Like the filling station every red-blooded American stops at to fill his personal spaceship with fossil fuel, depots are the solution to all the holes in the private space plan for “commercial space.” Unfortunately, storing and transferring hydrogen as a gas liquefied a few degrees above absolute zero in a zero-G environment has nothing in common with filling a car with gasoline. It will never work as advertised. It is a trick: a way to get those bordellos in orbit courtesy of taxpayer dollars. What a deal.

So what is the obvious future that our present level of knowledge presents to us when entertaining the possible and the impossible? More to come.

Wednesday, on the Opinion Pages of the NY Times, the renowned Vinton Cerf, “father of the internet,” published an article titled Internet Access Is Not a Human Right. It could be argued that the key word here is “access,” but before I address access again, I should start with the definition of the internet. I had this debate while at Michigan State in October of 2010 with the philosopher Andrew Feenberg. I’ll do my best not to be redundant while everything is still live via the links in this article.

Perhaps the internet requires much more definition, as the roots of the word can be confusing. Inter: situated within. Net: any network or reticulated system of filaments or the like. Its terminology is synonymous with the “web,” or a web, which requires multiple linkages to points of initiation in order to exist. If this is the internet that Feenberg is referring to, then I’d think it accurate. However, the internet is not actually a web of ever-connected points. Information destinations are not required.

The internet is analogous to space. Regardless of whether or not we access space, its potential exists: we can access it, or insert entities of sorts into it, regardless of whether another user is present to receive what is distributed. Space is a dynamic system of expanding material potential, as is the internet’s material potential. The potential of the internet expands as users’ (or rather, potential users’) access to the internet expands, and access could come in many forms, including growth in user populations, computing speed, or computing power. The internet, regardless of the constraints of the word, cannot be identified as a specific technology.

While visiting MSU, Feenberg used a “ramp” as analogous with the internet, which was at the center of his mistake. I don’t mean to sound gerontophobic, but based on the pervasive analysis that I’ve witnessed from Feenberg and Cerf’s generation, I’d have to attribute their perspective to the relatively similar changes in technology that they saw during the 20th century. The difference in composition and utility between a technology (hardware, software, methodology) and the internet is like that between an aircraft and the expanding celestial matter beyond earth’s ionosphere (that’s a sufficient analogy).

Cerf wrote, “technology is an enabler of rights, not a right itself. There is a high bar for something to be considered a human right.”

He is correct! The problem exists when he identifies the internet as a technology, which it cannot be (to be redundant). This is in fact a human rights issue. It is perhaps the most significant human rights issue of our time, because of the internet’s role in providing the potential for transparency in the public and private sectors. The deterministic nature of our technologies is bridging the cultural, political, legal, and economic gaps of all our societies today, and if we as individuals allow a few mistaken “leaders” or the interests of institutions to control our ability to access a space because of their résumés, then we are all doomed. The implications of the masses adopting Cerf and Feenberg’s view on space are tremendous for building an ethically sound environment for human development.

Regarding Cerf’s word “access,” it may provide him an out from his varied rhetoric in the article. Near the end he transitions to civil rights, where he writes of “the responsibility of technology creators themselves to support human and civil rights,” suggesting the internet holds egalitarian virtues. I’m no egalitarian, as egalitarianism just doesn’t prove feasible in a world of even hyper-connected individuals.

While the ability to access an open space should not be prohibited, technologies of certain kinds could be; reference weapons of sorts. I’m no advocate for governments supplying all of their citizens with camera phones (although it would be a great idea for the individual and the institution), but I am against governmental and other agents making efforts to restrict the individual’s ability to populate space with their entities, aside from the technologies that one would hold on his/her person.

When the United Nations declared the Internet a Human Right (PDF), they weren’t necessarily evaluating its full potential, but they were stressing that individuals should have the ability to be transparent and to review information of all kinds as they so please, catering to the collective knowledge of the species and everything it supports. The problem with this article is the future implications of its rhetoric, even though he means well.

Tangent: Cerf, having studied math, computer science, and IS for decades, knows as well as anyone that it is virtually (pun intended) impossible to prohibit internet expansion, as small pockets of those educated in the knowledge community of development can find a way. Any computer (which would be the blockage point) can be hacked; it’s just a matter of time and will. I spent the last year consulting with Hewlett-Packard Global Info Security on multiple acquisitions of competitive companies and security tool providers, and as anyone in the IS/IT security industry can tell you, there are no solutions, only active management of incidents and problems. This is why methodologies are as valuable as (if not more valuable than) hardware and software in modern business transactions. So then why wouldn’t Cerf think more thoroughly about this before publishing in the NY Times? Could it be because he has an equity stake (as an employee of multiple firms) in a less open space (internet)? Speculation aside, I’m in the business services industry, and I studied “control” specifically. Business is about control, which is the value proposition in establishing institutions’ virtues as separate from those of the individual. We can only forecast and manage risks well in areas that we can define and control. Business itself doesn’t require a suppressive type of control to make good calls on risks. A more transparent world could tell us all (individuals and institutions alike) more about the types of decisions that benefit the most people in a society.

In the future let’s all make a conscious effort to keep spaces open and hope that the benefits incentivize philanthropists, entrepreneurs, and governments to provide technology to the masses at a rate that enhances the human condition.

–Originally at Integrationalism

The Journal for Biological & Health Innovation is accepting papers for peer review now. This journal is specific to Africa, and our thoughts, theory, research, and practice could have a huge impact on the expeditious technological development of the rest of the world.

After posting a few weeks back on a Richard Dawkins article specific to Jesus and Atheism, Lincoln Cannon responded to me with a post called the New God Argument. I first heard this argument from Lincoln at the University of Utah while visiting the area for a conference.

It’s logically sound, when the faith position is adopted. The argument is a derivative of, or rather an advancement on, Nick Bostrom’s Simulation Argument and Robin Hanson’s Great Filter argument, as the links above will tell anyone in much more detail. I’ve even cited Bostrom’s 2003 paper in my own defense after being wrongfully labeled as an atheist. It’s one thing to state that there is no God (atheism) or that we can’t know if there is a God (agnosticism), and quite another to state that we could create or evolve into one, or a vast many.

I think that Lincoln’s argument is progressive and may provide the next wave of theology arguments in their defense this century. It’s fascinating to see how far the modern human mind can go in its extrapolation of our existing technological potential. As Lincoln puts it, the logical truth is that posthumans probably…

[from Lincoln’s — angel argument, benevolence argument, and creation argument]
posthumans probably already exist
AND posthumans probably are more benevolent than us
AND posthumans probably created our world

After reading the argument I’m compelled to revisit my previous writings on spirituality. When I wrote that I was NOT an atheist, I was leaving open the possibility (because of the probability) that we, as the New God Argument reads, won’t become extinct before becoming posthuman. I was also relying on the probability that we could potentially create civilizations, worlds, galaxies, universes, and multiverses, with humanoid or Homo sapiens-like individuals. Having stated that, I think Lincoln’s and my definitions of the God figure are much different.

When I reference the term God, I’m only meaning to represent a creator figure; I am, however, excluding the potential for this figure to intervene in the lives/world/simulation it created. I can’t find a rationale suggesting the creator figure would have any incentive to intervene or to interact benevolently or otherwise.

Physics dis-incentives: I think that there would first exist some very rigid code (computer language) that manifests in what we understand as our physical laws. Plenty of traditional atheists have identified the spiritual realm’s inconsistency with physics as a cornerstone of their rebuttal to it. Their point being, physics is the great divide between what we are/can be and what we cannot.

Management dis-incentives: I don’t think that the creator figure would have any incentive to modify imperfections that it sees in its creation, because the potential to recreate duplicates to modify, with a searchable history for analysis, is so attractive. We see these types of practices currently in the Information Technology (IT) industry, becoming more common as computing power, speed, and space become greater, faster, and more abundant respectively. While there is the potential for multiple creators, in different places and times during a continuous evolution of (what some would call) our current transhuman being, to create existences like our own, they would all be quite different depending on the technology available, and unlikely to be curated to take advantage of the latest technologies, because of the obsolescence that exponential technological growth brings.

Economics dis-incentives: Similar to the argument that I made in 2010 at Transhumanism & Spirituality, the context in which individuals identify with “their own” spirits and a “supreme” spirit is inconsistent with the spirit having any potential to actually interact on the individual’s behalf, wherein it connects the individual with physical being. The arbitrage or competition phenomenon in a competitive situation would create definite dis-incentives for benevolence.

To go a bit further, I would like to take a tangent from Lincoln’s progressive Mormon Transhumanist philosophy and bring into consideration the ideal that some Christians subscribe to regarding tangible or physical creations by spiritual beings or God (see page 3), and further, spirituality being a tangible phenomenon.

Simply, there would be physical traces of spiritual activity if at any point there were any other-than-physical interactions in our physical realm. Prayers and miracles, for instance, would have physical manifestations. One of my favorites is walking on water, or even flying. I’m reminded of the elementary science projects where students turn solids into liquids and finally into gasses. In order for either of the aforementioned miracles to happen, the physical properties of air or water would have to change from less dense to more dense in an almost instantaneous fashion… but there are simply no traces of that type of activity. The ideal that non-physical beings are more relevant to our physical realm is (in my opinion) invalid, and in fact provides a brand of egocentric hope that ails humankind’s potential for real harmonious interaction.

The faith assumption is the cornerstone of the New God Argument, not the probability logic behind the benevolence argument. This should be conversely true, considering the “value proposition” of spirituality: connectivity (or human connections).

It could be argued that I am faithful in humankind’s ability to generate a desirable future and create linkages between persons without any need for a creator figure to intervene, generating an organic, omnipresent benevolence. And even as I have coined myself as someone with no beliefs at all, I would maintain that all we have is our opportunity to live and create connections… and to dream of benevolence by using our technologies to create situations where resources of sorts are NOT scarce, and environments where we have incentives to connect. Faith is no substitute for reason and action.

- from the Integrationalism blog

Video — U.S. Job Market — People Staying in Jobs Longer — WSJ.com.

The Cleveland Fed’s research shows that people staying in jobs for longer periods of time is adding to the economic shock of any crisis where lay-offs or retraction are involved. The problem with this is that research also shows that people out of work are less likely to ever re-enter the work force.

While economists (per this interview) wouldn’t look at this as a “structural problem,” because of the forecasted potential for worker volume to return, it is likely that their opinions are a bit too faithful in the existing model of compensating laborers for an honest day’s work. The enduring jobs crisis can and should of course be looked at as an economic issue and even a political issue, but it would likely be better pursued as a socio-cultural and a legal issue.

The ideal of honesty and the preferred compensation for one’s good work is perhaps too subjective; having stated that, the ability for an individual to own so greatly, in lieu of the potentially many other individuals that cater to the discovery, development, and distribution of goods/services, is (in my opinion) the root cause of our (nation’s, states’, humanity’s) wealth distribution and compensation problems.