The recent Skeptical Inquirer article linked to this site, proclaiming antimatter propulsion to be “pseudoscience,” was… wrong.
Antimatter would have to be produced in quantity to serve as a primary propellant, but very small quantities may be all that is required for an interim system that uses antimatter to ignite fusion reactions.
It may be that some people pushing their own miracle solutions do not like other more practical possibilities.
Unlike any type of gravity manipulation, antimatter is a fact. Antimatter-catalyzed fusion is a possible method of interstellar propulsion, far more in the realm of possibility than antigravity.
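To see why only tiny quantities may matter, here is a minimal back-of-envelope sketch in Python. It simply applies E = mc² to a microgram-scale quantity of antimatter; the microgram figure and the megajoule comparison are illustrative assumptions, not parameters of any actual propulsion design.

```python
# Back-of-envelope: energy released by annihilating a small mass of antimatter.
# Illustrative assumptions only; not a propulsion design.

C = 2.998e8  # speed of light, m/s

def annihilation_energy(antimatter_kg):
    """Energy from annihilating antimatter with an equal mass of
    ordinary matter: E = 2 * m * c^2 (both masses convert to energy)."""
    return 2.0 * antimatter_kg * C**2  # joules

# One microgram of antimatter (an assumed, illustrative quantity):
e = annihilation_energy(1e-9)  # 1 microgram = 1e-9 kg
print(f"1 microgram annihilated: {e:.2e} J ({e/1e6:.0f} MJ)")  # ~1.8e8 J

# For scale: driver energies for igniting inertial-fusion pellets are
# usually quoted in the megajoule range, so microgram quantities are
# already in the right ballpark for antimatter-catalyzed fusion concepts.
```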
They found yet another reason to build nuclear interceptors to deflect asteroid and comet impact threats.
Sooner or later something is going to hit us. It could be like Tunguska in 1908 and destroy a city instead of a forest in Siberia, or it could be like what hit the Yucatán 65 million years ago, except just a little bigger, in which case nothing larger than bacteria will survive. There is nothing written anywhere that says it will not happen tomorrow.
The wailing and gnashing of teeth over spending money on space never seems to cross over to DOD programs, where obscene amounts of tax dollars are spent on Cold War toys used to fight mountain tribesmen armed with Kalashnikovs.
For example: http://www.bloomberg.com/news/2012-02-13/navy-discloses-811-million-overrun-on-gerald-ford-carrier.html
The completed initial aircraft carrier, the first of three in the $40.2 billion program, is projected to cost at least $11.5 billion.
The famous Chilean philosopher Humberto Maturana describes “certainty” in science as subjective emotional opinion, to the astonishment of prominent physicists. The French astronomer and “Leonardo” publisher Roger Malina hopes that the LHC safety issue will be discussed in a broader social context, and not only within the closed scientific framework of CERN.
The latest edition of the renowned “Ars Electronica Festival” in Linz (Austria) was dedicated in part to an uncritical worship of the gigantic particle accelerator LHC (Large Hadron Collider) at the European Nuclear Research Center CERN, located on the Franco-Swiss border. CERN in turn promoted an art prize built around the idea of “cooperating closely” with the arts. This time the objections were of a philosophical nature, and they carried weight.
In a thought-provoking presentation Maturana addressed the limits of our knowledge and the intersubjective foundations of what we call “objective” and “reality.” His talk was laced with excellent remarks and witty asides that contributed much to the accessibility of these fundamental philosophical problems. “Be realistic, be objective!”, Maturana pointed out, simply means that we want others to adopt our point of view. The great constructivist and founder of the concept of autopoiesis clearly distinguished his approach from a solipsistic position.
Given Ars Electronica’s spotlight on CERN and its experimental sub-nuclear research reactor, Maturana’s explanations were especially pertinent, though to the assembled CERN celebrities they may have come as a mixture of unpleasant surprise and apparent irrelevance to their work.
During the question-and-answer period, Markus Goritschnig asked Maturana whether it is not problematic that CERN is basically controlling itself and dismissing a number of existential risks that have been discussed in relation to the LHC (including hypothetical but mathematically demonstrable risks raised, and later downplayed, by physicists such as Nobel Prize winner Frank Wilczek), and whether Maturana thought it necessary to integrate other sciences besides physics, such as risk research, into the LHC safety assessment process. Maturana replied (in the video from about 1:17): “We human beings can always reflect on what we are doing and choose. And choose to do it or not to do it. And so the question is, how are we scientists reflecting upon what we do? Are we taking seriously our responsibility of what we do? […] We are always in the danger of thinking that, ‘Oh, I have the truth’, I mean — in a culture of truth, in a culture of certainty — because truth and certainty are not as we think — I mean certainty is an emotion. ‘I am certain that something is the case’ means: ‘I do not know’. […] We cannot pretend to impose anything on others; we have to create domains of interrogativity.”
Disregarding these reflections, Sergio Bertolucci (CERN) declared the peer-review system within the physics community to be sufficient scholarly control, and dismissed all the disputed risks with the “cosmic ray argument”: far more energetic collisions take place naturally in the atmosphere without any adverse effect. This CERN safety argument for the LHC can, however, be criticized from several perspectives. For example, very high-energy cosmic-ray collisions can only be measured indirectly, and the collision frequency under the unprecedented artificial and extreme conditions at the LHC is many orders of magnitude higher than in the Earth’s atmosphere or anywhere else in the nearer cosmos.
The second presentation of the “Origin III” symposium was given by Roger Malina, an astrophysicist and the editor of “Leonardo” (MIT Press), a leading academic journal for the arts, sciences and technology.
Malina opened with a disturbing fact: “95% of the universe is of an unknown nature, dark matter and dark energy. We sort of know how it behaves. But we don’t have a clue of what it is. It does not emit light, it does not reflect light. As an astronomer this is a little bit humbling. We have been looking at the sky for millions of years trying to explain what is going on. And after all of that and all those instruments, we understand only 3% of it. A really humbling thought. […] We are the decoration in the universe. […] And so the conclusion that I’d like to draw is that: We are really badly designed to understand the universe.”
The main problem in research is: “curiosity is not neutral.” When astrophysics reaches its limits, cooperation between arts and science may indeed be fruitful for various reasons and could perhaps lead to better science in the end. In a later communication Roger Malina confirmed that the same can be demonstrated for the relation between natural sciences and humanities or social sciences.
However, the astronomer emphasized that an “art-science collaboration can lead to better science in some cases. It also leads to different science, because by embedding science in the larger society, I think the answer was wrong this morning about scientists peer-reviewing themselves. I think society needs to peer-review itself and to do that you need to embed science differently in society at large, and that means cultural embedding and appropriation. Helga Nowotny at the European Research Council calls this ‘socially robust science’. The fact that CERN did not lead to a black hole that ended the world was not due to peer-review by scientists. It was not due to that process.”
One of Malina’s main arguments focused on differences in “the ethics of curiosity”. The best ethics in (natural) science include notions like intellectual honesty, integrity, organized scepticism, disinterestedness, impersonality, and universality. “Those are the belief systems of most scientists. And there is a fundamental flaw to that. And Humberto this morning really expanded on some of that. The problem is: curiosity is embodied. You cannot make it into a neutral ideal of scientific curiosity. And here I have a quote from Humberto’s colleague Varela: ‘All knowledge is conditioned by the structure of the knower.’”
In conclusion, better cooperation among the various sciences and skills is urgently necessary, because: “Artists ask questions that scientists would not normally ask. Finally, why we want more art-science interaction is because we don’t have a choice. There are certain problems in our society today that are so tough we need to change our culture to resolve them. Climate change: we’ve got to couple the science and technology to the way we live. That’s a cultural problem, and we need artists working on that with the scientists every day of the next decade, the next century, if we survive it.”
Roger Malina then turned directly to the LHC safety discussion, openly contradicting the safety assurances given earlier: he would generally hope for a much more open process concerning the LHC safety debate, rather than one confined to the narrow field of particle physics. Concretely: “There are certain problems where we cannot cloister the scientific activity in the scientific world, and I think we really need to break the model. I wish CERN, when they had been discussing the risks, had done that in an open societal context, and not just within the CERN context.”
CERN is presently holding its annual meeting in Chamonix to fix the LHC’s 2012 schedule and to increase luminosity by a factor of four, in the hope of finally finding the Higgs boson. This runs against a $100 bet by Stephen Hawking, who is convinced that micro black holes will be observed instead, immediately decaying via hypothetical “Hawking radiation” (with the God Particle’s blessing). In that case, Hawking pointed out, it would be he who wins the Nobel Prize. Quite ironically, official T-shirts were sold at Ars Electronica showing the “typical signature” of a micro black hole decaying at the LHC: a totally hypothetical process involving a bunch of unproven assumptions.
In 2013 CERN plans to upgrade the LHC, repairing construction defects at a cost of up to CHF 1 billion, in order to run the “Big Bang machine” at double its present energies. A neutral and multi-disciplinary risk assessment is still lacking, while a number of scientists insist that their theories pointing to even global risks have not been invalidated. CERN’s latest safety assurance, comparing natural cosmic rays hitting the Earth with the LHC experiment, is valid only under rather narrow assumptions: the relatively young analyses of high-energy cosmic rays are based on indirect measurements and calculations, and the type, velocity, mass and origin of these particles are unknown. Even taking the stated relations for granted and calculating with the “reassuring” figures given by CERN PR, within ten years of operation the LHC, under extreme and unprecedented artificial conditions, would produce as many high-energy particle collisions as occur in about 100,000 years in the entire atmosphere of the Earth. Just to illustrate the energetic potential of the gigantic facility: one LHC beam, thinner than a hair and consisting of hundreds of trillions of protons, carries the kinetic energy of an aircraft carrier moving at 12 knots.
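For readers who want to check the orders of magnitude in the paragraph above, here is a rough sketch in Python. All inputs are approximate public LHC design figures and textbook cosmic-ray estimates, used here as assumptions for illustration rather than as the article’s own calculation:

```python
# Rough order-of-magnitude check of the two claims above.
# All inputs are approximate design figures / textbook estimates.

# --- Claim 1: ten LHC years vs. cosmic rays in the whole atmosphere ---
luminosity     = 1e34    # design luminosity, cm^-2 s^-1
xsec_inelastic = 8e-26   # inelastic pp cross-section (~80 mb), cm^2
live_seconds   = 1e7     # effective running time per year, s
years          = 10

lhc_collisions = luminosity * xsec_inelastic * live_seconds * years
print(f"LHC collisions in {years} years: ~{lhc_collisions:.0e}")  # ~1e17

# Cosmic rays above the LHC-equivalent lab energy (~1e17 eV): the
# integral flux is very roughly 1e-2 per m^2 per year at that energy.
cr_per_m2_year = 1e-2
earth_surface  = 5.1e14  # m^2
cr_per_year    = cr_per_m2_year * earth_surface  # ~5e12 per year

print(f"Equivalent atmosphere-years: ~{lhc_collisions / cr_per_year:.0e}")
# -> a few times 1e4, within an order of magnitude of the ~100,000-year
#    figure quoted above.

# --- Claim 2: kinetic energy stored in one LHC beam ---
protons_per_beam = 2808 * 1.15e11  # bunches x protons per bunch
beam_energy_j = protons_per_beam * 7e12 * 1.602e-19  # 7 TeV per proton
print(f"One beam: ~{beam_energy_j/1e6:.0f} MJ")  # ~360 MJ

v = 12 * 0.514  # 12 knots in m/s
mass_tonnes = 2 * beam_energy_j / v**2 / 1000
print(f"Ship mass with equal KE at 12 kn: ~{mass_tonnes:,.0f} t")
# -> roughly 19,000 t: the scale of a large warship or small carrier.
```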
This article in the Physics arXiv Blog (MIT’s Technology Review) reads: “Black Holes, Safety, and the LHC Upgrade — If the LHC is to be upgraded, safety should be a central part of the plans.”, closing with the claim: “What’s needed, of course, is for the safety of the LHC to be investigated by an independent team of scientists with a strong background in risk analysis but with no professional or financial links to CERN.” http://www.technologyreview.com/blog/arxiv/27319/
Australian ethicist and risk researcher Mark Leggett concluded in a paper that CERN’s LSAG safety report on the LHC meets less than a fifth of the criteria of a modern risk assessment. There but for the grace of a goddamn particle? Probably not. Before pushing the LHC to its limits, CERN must be challenged by a truly neutral, external and multi-disciplinary risk assessment.
Video recordings of the “Origin III” symposium at Ars Electronica: presentation by Humberto Maturana:
Communication on LHC safety directed to CERN (Feb 10, 2012), calling for a neutral and multidisciplinary risk assessment before any LHC upgrade: http://lhc-concern.info/?page_id=139
More info, links and transcripts of lectures at “LHC-Critique — Network for Safety at experimental sub-nuclear Reactors”:
If nothing else, Japan’s recent tragedy has brought the risk of current nuclear power plants back into focus. While it is far too early to tell just how grave the Fukushima situation truly is, it is already obvious that our best-laid plans for engineering facilities to withstand cataclysmic-scale events are inadequate.
Few places on the globe are as well prepared as Japan for earthquakes and the possibility of subsequent tsunamis. However, in spite of their preparedness — which was evidenced by the remarkably small number of casualties given the nature of the events that took place (can you imagine how many people would have perished had this same disaster struck somewhere else in the world?) — Japan’s ability to manage a damaged nuclear power plant was severely compromised.
As frightening as Japan’s situation is, what ought to frighten us even more is that there are many more nuclear power plants in equally vulnerable locations all over the globe. In California, for example, both the San Onofre and Diablo Canyon facilities are right on the coast (they both use ocean water for cooling) and the Diablo Canyon facility in particular is perilously close to a major fault.
Given what we’ve seen in Japan, the widely varying degrees of preparedness around the world, the age of many of the existing power plants and the consequences for even a single catastrophic containment failure, shouldn’t we be taking a long, hard look at nuclear power as a viable means of providing energy for the planet? Have we learned so little from Three Mile Island, Chernobyl, and now Fukushima? Just how capable are we [really] of dealing with a second, a third or a fourth disaster of this type? (and what if they were to happen simultaneously?) With so many existential risks completely beyond our control, does it make sense to add another one when there are other, lower risk alternatives to nuclear energy within our reach?
Below is a Pearltree documenting the situation and management of the damaged Fukushima reactors. Obviously, the news is grave, but imagine if this same situation had transpired in Chile.
It is interesting to note that the technical possibility of sending an interstellar Ark appeared in the 1960s, based on Ulam’s concept of the “blast-ship,” which uses the energy of nuclear explosions to move forward. Detailed calculations were carried out under Project Orion. http://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion) In 1968 Dyson published the article “Interstellar Transport,” which gives upper and lower bounds for such projects. In the conservative estimate (i.e., assuming no new technical breakthroughs), it would cost 1 U.S. GDP (600 billion U.S. dollars at the time of writing) to launch a spaceship with a mass of 40 million tonnes (of which 5 million tonnes is payload), and its flight time to Alpha Centauri would be 1,200 years. In the more advanced version the price is 0.1 U.S. GDP, the flight time 120 years, and the starting weight 150,000 tonnes (of which 50,000 tonnes is payload). In principle, using a two-stage scheme, more advanced thermonuclear bombs, and reflectors, the flight time to the nearest star could be reduced to 40 years. Of course, the crew of the spaceship is doomed to extinction if they do not find a habitable planet fit for humans in the nearest star system; another option is to colonize an uninhabited planet.

In 1980 R. Freitas proposed lunar exploration using a self-replicating factory with an original weight of 100 tons, though controlling it would require artificial intelligence (“Advanced Automation for Space Missions,” http://www.islandone.org/MMSG/aasm/). Artificial intelligence does not yet exist, but such a factory could be managed by people. The main question is how much technology and equipment would have to be thrown at a moonlike uninhabited planet so that people could build a completely self-sustaining and growing civilization on it. It amounts to creating something like an inhabited von Neumann probe. A modern self-sustaining state includes at least a few million people (like Israel), with hundreds of tonnes of equipment per person, mainly in the form of houses and roads; the weight of machines is much smaller. This gives an upper bound of about 1 billion tonnes for a human colony able to replicate. The lower estimate is about 100 people with roughly 100 tonnes each (mainly food and shelter), i.e. 10,000 tonnes of mass. A realistic assessment lies somewhere in between, probably in the tens of millions of tonnes (a back-of-envelope sketch of these bounds follows after the list below). All this assumes that no miraculous nanotechnology has yet been developed.

The advantage of a spaceship-Ark is that it is a non-specific response to a host of different threats with indeterminate probabilities. Against a specific threat (an asteroid, an epidemic), money is better spent on removing that threat. Thus, had such a decision been taken in the 1960s, such a ship could be on its way by now. But if we set aside the technical side of the issue, there are several trade-offs in strategies for creating such a spaceship.

1. The sooner such a project is started, the less technically advanced it will be, the lower its chances of success, and the higher its cost. The later it is started, the greater the chance that it will not be completed before a global catastrophe.

2. The later the project starts, the greater the chance that it will carry the “diseases” of its mother civilization with it (e.g., the ability to create dangerous viruses).
3. The project to create a spaceship could lead to the development of technologies that threaten civilization itself. The blast-ship uses hundreds of thousands of hydrogen bombs as fuel; it could therefore be used as a weapon, or another party might fear it and respond. In addition, the spaceship could turn around and strike the Earth like a star-hammer, or such a strike might be feared. During construction, man-made accidents with enormous consequences could occur, up to the detonation of all the bombs on board. If the project is implemented by one country in time of war, other countries could try to shoot the spaceship down at launch.

4. The spaceship is a means of protection against a Doomsday Machine, a strategic response in the style of Herman Kahn. The creators of such a Doomsday Machine may therefore perceive the Ark as a threat to their power.

5. Should we implement one more expensive project, or several cheaper ones?

6. Is it sufficient to limit colonization to the Moon, Mars, Jupiter’s moons, or objects in the Kuiper belt? At the least, these can serve as fallback positions at which the technology of autonomous colonies can be tested.

7. The sooner the spaceship starts, the less we know about exoplanets. How far and how fast should the Ark fly in order to be in relative safety?

8. Could the spaceship hide itself so that the Earth does not know where it is, and should it do so? Should the spaceship communicate with Earth, or would that risk an attack by a hostile AI?

9. Would such projects exacerbate the arms race or lead to premature depletion of resources and other undesirable outcomes? The creation of pure hydrogen bombs would simplify the creation of such a spaceship, or at least reduce its cost; but at the same time it would increase global risks, because nuclear non-proliferation would suffer complete failure.

10. Will the Earth in the future compete with its independent colonies, and could this lead to star wars?

11. If the ship departs slowly enough, could it be destroyed from Earth by a self-propelled missile or a radiation beam?

12. Is this mission a real chance for the survival of mankind? Those who fly away are likely to die, because the chance of the mission’s success is no more than 10 percent. Those remaining on Earth may start to behave more recklessly, reasoning: “Well, now that we have protection against global risks, we can start risky experiments.” As a result, the project lowers the total probability of survival.

13. What are the chances that the Ark’s computer network will download a virus if it communicates with Earth? And if it does not communicate, that too reduces the mission’s chances of success. Competition for nearby stars is also possible, and faster ships would win it; ultimately there are few stars within a distance of about 5 light-years (Alpha Centauri, Barnard’s Star), and competition could begin for them. Dark lone planets or large asteroids without host stars may also exist; their density in the surrounding space should be about 10 times greater than that of stars, but finding them is extremely difficult. And if the nearest stars have no planets or moons, that would be a problem. Some stars, including Barnard’s, are prone to extreme stellar flares, which could kill the expedition.

14. The spaceship will not protect people from a hostile AI that finds a way to catch up with it.
Also, in case of war, starships may be prestigious and easily vulnerable targets; an unmanned rocket will always be faster than a spaceship. If Arks are sent to several nearby stars, this does not ensure their secrecy, since their destinations will be known in advance. A phase transition of the vacuum, the explosion of the Sun or Jupiter, or another extreme event could also destroy the spaceship. See, e.g., A. Bolonkin, “Artificial Explosion of Sun. AB-Criterion for Solar Detonation,” http://www.scribd.com/doc/24541542/Artificial-Explosion-of-Sun-AB-Criterion-for-Solar-Detonation

15. The spaceship is too expensive a protection against the many other risks that do not require such distant removal. People could hide from almost any pandemic on well-isolated islands in the ocean. People could hide on the Moon from gray goo, an asteroid collision, a supervolcano, or irreversible global warming. The Ark-spaceship will carry with it the problems of genetic degradation, the propensity for violence and self-destruction, and the problems associated with limited human outlook and cognitive biases. The spaceship would only aggravate the problems of resource depletion, wars, and the arms race. Thus, the set of global risks against which the spaceship is the best protection is quite narrow.

16. And most importantly: does it make sense to begin this project now? In any case, there is no time to finish it before new risks become real and new ways of creating spaceships using nanotech appear. Of course, it is easy to envision a nano- and AI-based Ark: it would be as small as a grain of sand, carry only one human egg or even just DNA information, and could self-replicate. The main problem is that it could be created only AFTER the most dangerous period of human existence, which is the period just before the Singularity.
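Here is the back-of-envelope sketch of the colony-mass bounds referred to above, in Python; the population and per-person tonnage figures are the text’s own rough assumptions, not established requirements:

```python
import math

# Back-of-envelope bounds on the mass of a self-sustaining, replicating
# human colony, using the rough per-person figures assumed in the text.

def colony_mass_tonnes(people, tonnes_per_person):
    """Total colony mass = population x infrastructure per person."""
    return people * tonnes_per_person

# Upper bound: a modern self-sustaining state (a few million people,
# hundreds of tonnes of houses, roads and equipment per person).
upper = colony_mass_tonnes(5e6, 200)  # ~1e9 tonnes
# Lower bound: a minimal crew with food and shelter only.
lower = colony_mass_tonnes(100, 100)  # 1e4 tonnes

# One way to split the difference: the geometric midpoint of the bounds.
midpoint = math.sqrt(upper * lower)   # ~3e6 tonnes

print(f"upper bound:        {upper:.0e} t")
print(f"lower bound:        {lower:.0e} t")
print(f"geometric midpoint: {midpoint:.0e} t")

# The text's own guess of "tens of millions of tonnes" sits about an
# order of magnitude above this midpoint, i.e. in the same rough regime.
# For comparison, Dyson's conservative Orion-style ship quoted above
# carries a 5e6 t payload, roughly matching such a colony; the advanced
# ship's 5e4 t payload would cover only the minimal-crew lower bound.
```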
Fifty years ago Herman Kahn coined the term “Doomsday Machine” in his book “On Thermonuclear War.” His ideas are still important, and now we can read what he really said online. His main points are that a Doomsday Machine is feasible, that it would cost around 10–100 billion USD, that it will become much cheaper in the future, and that there are seemingly good rational reasons to build one as the ultimate means of defence; but that it is better not to build it, because it would lead to a Doomsday-Machine race between states, with ever more dangerous and effective machines as the outcome. And this race would not be stable, but would provoke one side to strike first. This book, and especially this chapter, inspired Kubrick’s movie “Dr. Strangelove.” Herman Kahn, “On the Doomsday Machine.”