
It’s been a while since anyone contributed a post on space exploration here on the Lifeboat blogs, so I thought I’d offer a few thoughts on potential hazards to future interstellar travel, if indeed humanity ever attempts to explore that far into space.

It is only recently that the Voyager probes have given us some idea of the nature of the boundary between our solar system and what is commonly referred to as the Local Fluff, the Local Interstellar Cloud, through which we have been travelling for the past 100,000 years or so, and through which we will continue to travel for another 10,000 or 20,000 years yet. The cloud has a temperature of about 6,000°C, though it is very tenuous.

We are protected from the effects of the Local Fluff by the solar wind and the sun’s magnetic field; the front between the two lies just beyond the termination shock, where the solar wind slows to subsonic velocities. Here, in the heliosheath, the solar wind becomes turbulent through its interaction with the interstellar medium, keeping that medium out of the inner solar system. This is the region currently under study by the Voyager 1 and Voyager 2 space probes. It has been hypothesised that there may be a hydrogen wall further out, between the bow shock and the heliopause, composed of interstellar medium (ISM) interacting with the edge of the heliosphere, which would be yet another obstacle to consider for interstellar travel.

The short version is that what many consider ‘open space’ beyond the Kuiper belt may in fact contain many more mission-threatening obstacles to traverse before a craft can leave our solar system. Opinions welcome; I am not an expert on this.

What’s to worry? RMS Titanic departs Southampton.

This year marks the 100th anniversary of the Titanic disaster in 1912. What better time to think about lifeboats?

One way to start a discussion is with some vintage entertainment. On the centenary weekend of the wreck of the mega-liner, our local movie palace near the Hudson River waterfront ran a triple bill of classic films about maritime disasters: A Night to Remember, Lifeboat, and The Poseidon Adventure. Each one highlights an aspect of the lifeboat problem. They’re useful analogies for thinking about the existential risks of booking a passage on spaceship Earth.

Can’t happen…

A Night to Remember frames the basic social priorities: Should we have lifeboats, and who are they for? Just anybody? When William MacQuitty produced his famous 1958 docudrama of the Titanic’s last hours, the answers were blindingly obvious – of course we need lifeboats! They’re for everyone, and there should be enough! Where is that moral certainty these days? And whatever happened to the universal technological optimism of 1912? For example, certain Seasteaders guarantee your rights – and presumably a lifeboat seat – only as long as your dues are paid. Libertarians privatize public goods, run them into the ground, squeeze out every dime, move the money offshore, and then dictate budget priorities in their own interest. Malthusians handle the menu planning. And the ship’s captain just might be the neo-feudal Prince Philip, plotting our course back to his Deep Green Eleventh Century.

Think Mink and Don’t Sink: Tallulah Bankhead in Hitchcock’s Lifeboat.

Alfred Hitchcock’s Lifeboat deals with the problems of being in one. For a very long time – unlike the lucky stiffs on the Titanic, who were picked up in 2 hours. Specifically, it’s about a motley group of passengers thrown together in an open boat with short provisions, no compass, and no certain course. And, oh yes, the skipper is their mortal enemy: The lifeboat is helmed by the U-boat commander who torpedoed their ship. He overawes them with seafaring expertise and boundless energy (thanks to the speed pills in his secret stash) and then lulls them by singing sentimental German lieder. At night, the captain solves his problems of supply and authority by culling the injured passengers while everyone’s asleep. He tells the survivors they’re going to Bermuda. They’re actually headed for a rendezvous with his supply ship – and from there the slow boat to Buchenwald. The point of Lifeboat is simple: What can you do in your life and environment so you never, ever end up in one?

What’s wrong with this picture?

Risk avoidance is the moral of The Poseidon Adventure. A glorious old ocean liner, the Poseidon, is acquired by new owners who plan to scrap it. But these clever operators maximize shareholder value by billing the ship’s final voyage as a New Year’s cruise to Greece. They take on every paying passenger they can find, barter with a band to get free entertainment, and drive the underloaded ship hard and fast into the stormy winter Mediterranean over the protests of the captain and seasick travelers. At this point an undersea earthquake triggers a 90-foot tsunami, and despite ample warnings this monster wave broadsides the top-heavy liner at midnight, during the New Year’s party. First the ball drops. Then the other shoe drops. The result is the ultimate “Bottoms Up!”

And the takeaway of The Poseidon Adventure applies to all of these films and to life in general, not to mention the next few generations on the planet. As David McCullough famously concluded in The Johnstown Flood, it can be a fatal assumption ‘that the people who were responsible for your safety will act responsibly.’

You can have a ripping good time watching these old movies. And as futurists, sociologists, planners, catastrophists, humanists or transhumanists, you can conjure with them, too. Icebergs and U-boats have ceased to menace – of cruise ships, I say nothing.

But the same principles of egalitarianism, legitimacy, non-belligerence and prudential planning apply to Earth-crossing asteroids, CERN’s operations and program, Nano-Bio-Info-Cogno manipulations, monetary policy and international finance, and NATO deployments present and future.

Or do they? And if they do, who says so?

Ship beautiful — the Aquitania on her way.

CC BY-NC-ND Clark Matthews and The Lifeboat Foundation

Earth’s Titanic Challenges by Clark Matthews is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.
Permissions beyond the scope of this license may be available at https://lifeboat.com.

GadgetBridge is currently just a concept. It might start its life as a discussion forum, later turn into a network or an organisation, and hopefully inspire a range of similar activities.

We will soon be able to use technology to make ourselves more intelligent, feel happier or change what motivates us. When the use of such technologies is banned, the nations or individuals who manage to cheat will soon lord it over their more obedient but unfortunately much dimmer fellows. When these technologies are made freely available, a few terrorists and psychopaths will use them to cause major disasters. Societies will have to find ways to spread these mind enhancement treatments quickly among the majority of their citizens, while keeping them from the few who are likely to cause harm. After a few enhancement cycles, the most capable members of such societies will all be “trustworthy” and use their skills to stabilise the system (see “All In The Mind”).

But how can we manage the transition period, the time in which these technologies are powerful enough to be abused but no social structures are yet in place to handle them? It might help to use these technologies for entertainment purposes, so that many people learn about their risks and societies can adapt (see “Should we build a trustworthiness tester for fun”). But ideally, a large, critical and well-connected group of technology users should be part of the development from the start and remain involved in every step.

To do that, these users would have to spend large amounts of money and dedicate considerable manpower. Fortunately, the basic spending and working patterns are already in place: people use a considerable part of their income to buy consumer devices such as mobile phones, tablet computers and PCs, and increasingly also accessories such as blood glucose meters, EEG recorders and many others; they also spend a considerable part of their time getting familiar with these devices. Manufacturers and software developers are keen to turn any promising technology into a product, and over time this will surely include most mind measuring and mind enhancement technologies. But for some critical technologies this might take too long. GadgetBridge is there to shorten that time, as follows:

- GadgetBridge spreads its philosophy — that mind-enhancing technologies are only dangerous when they are allowed to develop in isolation — that spreading these technologies makes a freer world more likely — and that playing with innovative consumer gadgets is therefore not just fun but also serves a good cause.

- Contributors make suggestions for new consumer devices based on the latest brain research and their personal experiences. Many people have innovative ideas but few are in a position to exploit them. Contributors would rather donate their ideas than see them wither away or be claimed by somebody else.

- All ideas are immediately published and offered free of charge to anyone who wants to use them. Companies select and implement the best options. Users buy their products and gain hands-on experience with the latest mind measurement and mind enhancement technologies. When risks become obvious, concerned users and governments look for ways to cope with them before they get out of hand.

- Once GadgetBridge produces results, it might attract funding from the companies that have benefited or hope to benefit from its services. GadgetBridge might then organise competitions, commission feasibility studies or develop a structure that provides modest rewards to successful contributors.

Your feedback is needed! Please be honest rather than polite: Could GadgetBridge make a difference?

Famous Chilean philosopher Humberto Maturana describes “certainty” in science as subjective emotional opinion, to the astonishment of the assembled physics luminaries. French astronomer and “Leonardo” publisher Roger Malina hopes that the LHC safety issue will be discussed in a broader social context, not only within the closer scientific framework of CERN.

(Article published in “oekonews”: http://oekonews.at/index.php?mdoc_id=1067777 )

The latest edition of the renowned “Ars Electronica Festival” in Linz (Austria) was dedicated in part to an uncritical worship of the gigantic particle accelerator LHC (Large Hadron Collider) at the European nuclear research centre CERN, located on the Franco-Swiss border. CERN in turn promoted an art prize with the idea of “cooperating closely” with the arts. This time the objections were of a philosophical nature – and they carried real weight.

In a thought-provoking presentation, Maturana addressed the limits of our knowledge and the intersubjective foundations of what we call “objective” and “reality.” His talk was peppered with excellent remarks and witty asides that contributed much to the accessibility of these fundamental philosophical problems. “Be realistic, be objective!”, Maturana pointed out, simply means that we want others to adopt our point of view. The great constructivist and founder of the concept of autopoiesis clearly distinguished his approach from a solipsistic position.

Given Ars Electronica’s spotlight on CERN and its experimental sub-nuclear research reactor, Maturana’s explanations were especially pertinent; to the assembled CERN celebrities they may have come as a mixture of unpleasant surprise and something they felt did not concern them.

During the question-and-answer period, Markus Goritschnig asked Maturana whether it wasn’t problematic that CERN is basically controlling itself and dismissing a number of existential risks discussed in relation to the LHC — including hypothetical but mathematically demonstrable risks also raised — and later downplayed — by physicists like Nobel Prize winner Frank Wilczek — and whether he thought it necessary to integrate other sciences besides physics, such as risk research, into the LHC safety assessment process. Maturana replied (in the video from about 1:17): “We human beings can always reflect on what we are doing and choose. And choose to do it or not to do it. And so the question is, how are we scientists reflecting upon what we do? Are we taking seriously our responsibility of what we do? […] We are always in the danger of thinking that, ‘Oh, I have the truth’, I mean — in a culture of truth, in a culture of certainty — because truth and certainty are not as we think — I mean certainty is an emotion. ‘I am certain that something is the case’ means: ‘I do not know’. […] We cannot pretend to impose anything on others; we have to create domains of interrogativity.”

Disregarding these reflections, Sergio Bertolucci (CERN) held that the peer review system within the physics community provides sufficient scholarly control. He dismissed all the disputed risks with the “cosmic ray argument”: much more energetic collisions take place naturally in the atmosphere without any adverse effect. This safety argument for the LHC can, however, be criticized from several perspectives. For example: very high-energy cosmic ray collisions can be measured only indirectly, and the collision frequency under the unprecedented artificial and extreme conditions of the LHC is orders of magnitude higher than in the Earth’s atmosphere or anywhere else in the nearer cosmos.

The second presentation of the “Origin” Symposium III was given by Roger Malina, an astrophysicist and the editor of “Leonardo” (MIT Press), a leading academic journal for the arts, sciences and technology.

Malina opened with a disturbing fact: “95% of the universe is of an unknown nature, dark matter and dark energy. We sort of know how it behaves. But we don’t have a clue of what it is. It does not emit light, it does not reflect light. As an astronomer this is a little bit humbling. We have been looking at the sky for millions of years trying to explain what is going on. And after all of that and all those instruments, we understand only 3% of it. A really humbling thought. […] We are the decoration in the universe. […] And so the conclusion that I’d like to draw is that: We are really badly designed to understand the universe.”

The main problem in research is: “curiosity is not neutral.” When astrophysics reaches its limits, cooperation between arts and science may indeed be fruitful for various reasons and could perhaps lead to better science in the end. In a later communication Roger Malina confirmed that the same can be demonstrated for the relation between natural sciences and humanities or social sciences.

However, the astronomer emphasized that an “art-science collaboration can lead to better science in some cases. It also leads to different science, because by embedding science in the larger society, I think the answer was wrong this morning about scientists peer-reviewing themselves. I think society needs to peer-review itself and to do that you need to embed science differently in society at large, and that means cultural embedding and appropriation. Helga Nowotny at the European Research Council calls this ‘socially robust science’. The fact that CERN did not lead to a black hole that ended the world was not due to peer-review by scientists. It was not due to that process.”

One of Malina’s main arguments focused on differences in “the ethics of curiosity.” The best ethics in (natural) science include notions like intellectual honesty, integrity, organized scepticism, disinterestedness, impersonality and universality. “Those are the belief systems of most scientists. And there is a fundamental flaw to that. And Humberto this morning really expanded on some of that. The problem is: curiosity is embodied. You cannot make it into a neutral ideal of scientific curiosity. And here I got a quote of Humberto’s colleague Varela: ‘All knowledge is conditioned by the structure of the knower.’”

In conclusion, better cooperation between the various sciences and skills is urgently necessary, because: “Artists ask questions that scientists would not normally ask. Finally, why we want more art-science interaction is because we don’t have a choice. There are certain problems in our society today that are so tough we need to change our culture to resolve them. Climate change: we’ve got to couple the science and technology to the way we live. That’s a cultural problem, and we need artists working on that with the scientists every day of the next decade, the next century, if we survive it.”

Roger Malina then turned directly to the LHC safety discussion and openly contradicted the safety assurance given earlier: he would generally hope for a much more open process in the LHC safety debate, rather than having it confined to the narrow field of particle physics. Concretely: “There are certain problems where we cannot cloister the scientific activity in the scientific world, and I think we really need to break the model. I wish CERN, when they had been discussing the risks, had done that in an open societal context, and not just within the CERN context.”

CERN is presently holding its annual meeting in Chamonix to fix the LHC’s 2012 schedule, aiming to increase luminosity by a factor of four in the hope of finally finding the Higgs boson – against a 100-dollar bet by Stephen Hawking, who is convinced that micro black holes will be observed instead, immediately decaying via hypothetical “Hawking radiation,” with the God Particle’s blessing. In that case, Hawking pointed out, he himself would gain the Nobel Prize. Quite ironically, official T-shirts were sold at Ars Electronica showing the “typical signature” of a micro black hole decaying at the LHC – a totally hypothetical process resting on a bunch of unproven assumptions.

In 2013 CERN plans to upgrade the LHC, at a cost of up to CHF 1 billion, to correct construction faults and run the “Big Bang machine” at double its present energies. A neutral and multi-disciplinary risk assessment is still lacking, while a number of scientists insist that their theories pointing to even global risks have not been invalidated. CERN’s latest safety assurance, comparing natural cosmic rays hitting the Earth with the LHC experiment, is valid only from rather narrow viewpoints: the relatively young analyses of high-energy cosmic rays rest on indirect measurements and calculations, and the type, velocity, mass and origin of these particles are unknown. But even taking those relations for granted and calculating with the “reassuring” figures given by CERN PR, within ten years of operation the LHC, under extreme and unprecedented artificial conditions, would produce as many high-energy particle collisions as occur in about 100,000 years in the entire atmosphere of the Earth. Just to illustrate the energetic potential of the gigantic facility: one LHC beam, thinner than a hair and consisting of billions of protons, carries the kinetic energy of an aircraft carrier moving at 12 knots.
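For readers who want to sanity-check that last comparison, here is a minimal back-of-the-envelope sketch in Python. The beam parameters are the nominal LHC design values (2,808 bunches of 1.15 × 10^11 protons per beam at 7 TeV), and the carrier displacement of roughly 20,000 tonnes is an assumption chosen purely for illustration; none of these figures come from the article itself.

    # Rough check of the beam-energy comparison above. All parameters are
    # assumptions: nominal LHC design values plus an assumed light-carrier
    # displacement, for illustration only.
    E_PROTON_EV = 7e12            # design proton energy: 7 TeV
    BUNCHES = 2808                # nominal bunches per beam
    PROTONS_PER_BUNCH = 1.15e11   # nominal bunch population
    EV_TO_J = 1.602e-19           # electron-volts to joules

    beam_energy_j = E_PROTON_EV * BUNCHES * PROTONS_PER_BUNCH * EV_TO_J
    print(f"Stored energy per beam: {beam_energy_j / 1e6:.0f} MJ")   # ~362 MJ

    carrier_mass_kg = 2.0e7       # assumed ~20,000-tonne carrier
    speed_m_s = 12 * 0.5144       # 12 knots in metres per second
    carrier_ke_j = 0.5 * carrier_mass_kg * speed_m_s ** 2
    print(f"Carrier kinetic energy: {carrier_ke_j / 1e6:.0f} MJ")    # ~381 MJ

Under these assumptions the two numbers land within a few percent of each other, so the aircraft-carrier image is at least dimensionally honest.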

This article in the Physics arXiv Blog (MIT’s Technology Review) reads: “Black Holes, Safety, and the LHC Upgrade — If the LHC is to be upgraded, safety should be a central part of the plans.”, closing with the claim: “What’s needed, of course, is for the safety of the LHC to be investigated by an independent team of scientists with a strong background in risk analysis but with no professional or financial links to CERN.”
http://www.technologyreview.com/blog/arxiv/27319/

Australian ethicist and risk researcher Mark Leggett concluded in a paper that CERN’s LSAG safety report on the LHC meets less than a fifth of the criteria of a modern risk assessment. There but for the grace of a goddamn particle? Probably not. Before pushing the LHC to its limits, CERN must be challenged by a really neutral, external and multi-disciplinary risk assessment.

Video recordings of the “Origin III” symposium at Ars Electronica:
Presentation Humberto Maturana:

Presentation Roger Malina:

“Origin” Symposia at Ars Electronica:
http://www.aec.at/origin/category/conferences/

Communication on LHC Safety directed to CERN
Feb 10 2012
For a neutral and multidisciplinary risk assessment to be done before any LHC upgrade
http://lhc-concern.info/?page_id=139

More info, links and transcripts of lectures at “LHC-Critique — Network for Safety at experimental sub-nuclear Reactors”:

www.LHC-concern.info

Lee Smolin is said to believe (according to a personal communication from Danila Medvedev, who was told about it by John Smart; I tried to reach Smolin for comment but failed) that global catastrophe is impossible, based on the following reasoning: the multiverse is dominated by those universes that are able to replicate. This self-replication occurs through black holes, and especially through those black holes which are created by civilizations. Thus the parameters of the universe are selected so that civilizations cannot self-destruct before they create black holes. An early version of Smolin’s argument is here: http://en.wikipedia.org/wiki/Lee_Smolin, but this early version was refuted in 2004, and so he (probably) added the existence of civilizations as another condition for cosmological natural selection. In any case, even if this is not Smolin’s actual line of thought, it is a quite possible one.

I think this argument is not persuasive, since selection can operate both in the direction of universes with more viable civilizations and in the direction of universes with a larger number of civilizations, just as biological evolution works both toward more robust offspring in some species (mammals) and toward a larger number of offspring with lower viability in others (plants, for example, the dandelion). Since some parameters governing the development of civilizations are extremely difficult to adjust through the basic laws of nature (for example, the chances of nuclear war or a hostile AI), while the number of emerging civilizations is easy to adjust, it seems to me that universes, if they replicate with the help of civilizations, will use the strategy of dandelions rather than the strategy of mammals. So they will create many unstable civilizations, and we are most likely one of them (the self-indication assumption also supports this conclusion – see a recent post by Katja Grace: http://meteuphoric.wordpress.com/2010/03/23/sia-doomsday-the-filter-is-ahead/). A toy calculation below makes the trade-off explicit.
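Here is that toy expected-value sketch in Python. All the numbers are invented for illustration; the argument only needs the product of (number of civilizations) and (chance each one produces black holes).

    # Toy model of Smolin-style selection: which universe type replicates more?
    # All figures below are invented for illustration, not estimates.
    def expected_replications(n_civilizations, p_black_hole_per_civ):
        # Expected number of daughter universes spawned via civilizations.
        return n_civilizations * p_black_hole_per_civ

    # "Mammal" universe: few civilizations, each very likely to survive
    # long enough to create black holes.
    print(expected_replications(5, 0.9))      # 4.5

    # "Dandelion" universe: many unstable civilizations, each unlikely to
    # survive, yet it out-replicates the "mammal" universe.
    print(expected_replications(1000, 0.01))  # 10.0

Under these made-up numbers the dandelion strategy wins, which is all the argument requires: if the count of civilizations is easier for cosmological selection to tune than their viability, unstable civilizations should predominate.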

But some pressure toward the preservation of civilizations could still exist. Namely, if an atomic bomb were as easy to create as dynamite – much easier than it is on Earth, where the difficulty depends on the abundance of uranium and its chemical and nuclear properties, i.e., is determined by the basic laws of our universe – then the average chances of a civilization’s survival would be lower. If Smolin’s hypothesis is correct, then we should encounter insurmountable difficulties in creating nano-robots, the microelectronics needed for strong AI, harmful accelerator experiments with strangelets (except those that lead to the creation of black holes and new universes), and several other potentially dangerous technology trends whose success depends on the basic properties of the universe, which may manifest themselves in the peculiarities of its chemistry.

In addition, Smolin’s evolution of universes implies that a civilization should create a black hole as early as possible in its history, because the later this happens, the greater the chance that the civilization will self-destruct before it can create black holes. Moreover, a civilization is not required to survive past the moment of “replication” (though survival may be useful for replication if the civilization creates many black holes over a long existence). From these two points it follows that we may be underestimating the risk that the Hadron Collider will create black holes.

I repeat: the early creation of a black hole that destroys the parent civilization, as suggested by Smolin’s logic, is very consistent with the situation of the Hadron Collider. The collider is a very early opportunity for us to create a black hole, compared with the alternative of becoming a super-civilization and learning how to collapse stars into black holes; that would take millions of years, and the chances of surviving to that stage are much smaller. Collider-created black holes may also be special in some way, which is a requirement for civilization-driven replication of universes. However, the creation of black holes in the collider most probably means the death of our civilization (though not necessarily: a black hole could grow extremely slowly in the bowels of the Earth, over millions of years for example, leaving us time to leave the planet; this depends on unknown physical conditions). For replication, such a black hole must have some feature that distinguishes it from the other holes arising in our universe, for example a powerful magnetic field (which exists in the collider) or a unique initial mass (which also exists at the LHC: it will collide heavy ions).

So Smolin’s logic is sound, but it does not prove that our civilization is safe; in fact it suggests quite the opposite: that the chance of extinction in the near future is high. We are not obliged to participate in the replication of universes suggested by Smolin, if it ever happens, especially if it is tantamount to the death of the parent civilization. If we continue our lives without black holes, it does not change the total number of universes that have arisen, since that number is infinite.

A few months ago, my friend Benjamin Jakobus and I created an online “risk intelligence” test at http://www.projectionpoint.com/. It consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. We calculate your risk intelligence quotient (RQ) on the basis of your estimates. So far, over 30,000 people have taken our test, and we’re currently writing up the results for some peer-reviewed journals.

Now we want to take things a step further, and see whether our measure correlates with the ability to make accurate estimates of future events. To this end we’ve created a “prediction game” at http://www.projectionpoint.com/prediction_game.php. The basic idea is the same; we provide you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts, but to future events. Unlike the first test, nobody knows whether these statements are true or false yet. For most of them, we won’t know until the end of the year 2010.

For example, how likely do you think it is that this year will be the hottest on record? If you think this is very unlikely you might select the 10% category. If you think it is quite likely, but not very likely, you might put the chances at 60% or 70%. Selecting the 50% category would mean that you had no idea how likely it is.
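As an illustration of how such estimates can be scored once the outcomes are known, here is a minimal sketch in Python using the Brier score, a standard calibration measure. To be clear, this is an assumption on my part: the post does not say that RQ is computed this way, and the actual formula may well differ.

    # Score probability estimates against outcomes with the Brier score.
    # NOTE: illustrative only; this is not necessarily the RQ formula.
    def brier_score(forecasts, outcomes):
        # forecasts: stated probabilities in [0, 1]
        # outcomes: 1 if the statement turned out true, else 0
        # 0.0 is perfect; always answering 50% scores 0.25.
        pairs = list(zip(forecasts, outcomes))
        return sum((p - o) ** 2 for p, o in pairs) / len(pairs)

    # Example: three statements rated 10%, 60% and 50% likely, of which
    # only the second turned out to be true.
    print(brier_score([0.10, 0.60, 0.50], [0, 1, 0]))  # 0.14

Lower is better here; a scoring rule like this rewards people whose stated probabilities track how often they are actually right.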

This is ongoing research, so please feel free to comment, criticise or make suggestions.

Another risk is the loss of human rationality while human life is preserved. In any society there are many people with limited cognitive abilities, and most achievements are made by a small number of talented people. Genetic and social degradation, a falling level of education, and the loss of skills in logic could lead to a temporary decrease in the intelligence of particular groups of people. But as long as humanity’s population is very large, this is not so bad, because there will always be enough intelligent people. A significant drop in population after a non-global disaster could exacerbate this problem, and the low intelligence of the remaining people would reduce their chances of survival. One can of course imagine the absurd scenario in which people degrade so far that new species without full-fledged intelligence evolve from us, and that later a new intelligence evolves again from those species.
More dangerous is a decline of intelligence caused by the spread of technological contaminants (or by the use of certain weapons). For example, consider the constantly growing global arsenic contamination from arsenic used in various technological processes. Sergio Dani wrote about this in his article “Gold, coal and oil”: http://sosarsenic.blogspot.com/2009/11/gold-coal-and-oil-regulatory-crisis-of.html, http://www.medical-hypotheses.com/article/S0306-9877(09)00666-5/abstract
Arsenic released into the biosphere during the extraction of gold remains there for millennia. Dani links arsenic to Alzheimer’s disease; in another paper he demonstrates that increasing concentrations of arsenic lead to an exponential increase in the incidence of Alzheimer’s disease. He believes humans are particularly vulnerable to arsenic poisoning because of their large brains and longevity. If, as Dani argues, people adapt to high levels of arsenic in the course of evolution, the adaptation will come at the cost of smaller brains and shorter life expectancy, and human intellect will be lost.
Besides arsenic, contamination by many other neurotoxic substances is occurring: CO, CO2, methane, benzene, dioxin, mercury, lead, etc. Although the level of pollution from each of them separately is below health standards, the sum of their impacts may be larger. One proposed cause of the fall of the Roman Empire is the widespread poisoning of its citizens (though not of the barbarians) by lead from water pipes. The Romans, of course, could have had no knowledge of such remote and unforeseen consequences; we, too, may not know about many consequences of our own activities.
Dementia is also caused by alcohol and most drugs, by many medicines (dementia is listed, for example, as a side effect on the package inserts of some heartburn remedies), and arguably by rigid ideological systems, or memes. A number of infections, particularly prion infections, also lead to dementia.
Despite all this, the average IQ of people is growing, as is life expectancy.

AI is our best hope for long-term survival. If we fail to create it, that failure will have happened for some reason. Here I offer a complete list of possible causes of failure, though I do not believe in any of them. (I was inspired by Vernor Vinge’s article “What if the Singularity does not happen?”)

I think most of these points are wrong and that AI will eventually be created.

Technical reasons:
1) Moore’s Law will stop, for physical reasons, before hardware sufficiently powerful and inexpensive for artificial intelligence can be built.
2) Silicon processors are less efficient than neurons for creating artificial intelligence.
3) The AI problem cannot be algorithmically parallelized, and as a result any AI will be extremely slow.

Philosophy:
4) Human beings use some method of processing information that is essentially inaccessible to algorithmic computers, as Penrose believes. (But we could harness this method using bioengineering techniques.) Generally, a final recognition of the impossibility of creating artificial intelligence would be tantamount to recognizing the existence of the soul.
5) A system cannot create a system more complex than itself, so people cannot create artificial intelligence, since all the proposed solutions are too simple. That is, AI is possible in principle, but people are too stupid to build it. In fact, one reason for past failures in the creation of artificial intelligence is that people underestimated the complexity of the problem.
6) AI is impossible, because any sufficiently complex system recognizes the meaninglessness of existence and stops.
7) All possible ways to optimize are exhausted. AI has no fundamental advantage over the human-machine interface and only a limited scope of use.
8) A human in a body has the maximum possible level of common sense, and any disembodied AI is either ineffective or merely a model of a person.
9) AI is created, but there are no problems for it to solve: all problems have already been solved by conventional methods or proven uncomputable.
10) AI is created, but is not capable of recursive self-optimization, since this would require radically new ideas, which it does not have. As a result, AI exists either as a curiosity or in limited specific applications, such as automatic drivers.
11) The idea of artificial intelligence is flawed, because it has no precise definition, or is even an oxymoron, like “artificial natural.” As a result, what gets developed are tools for specific goals, or models of humans, but not universal artificial intelligence.
12) There is an upper limit to the complexity of systems beyond which they become chaotic and unstable, and it only slightly exceeds the intellect of the most intelligent people. AI slowly approaches this threshold of complexity.
13) The bearer of intelligence is qualia. At our level of intelligence there must be many events that are indescribable and unknowable to us, but a superintellect should understand them by definition; otherwise it is not a superintellect, but merely a fast intellect.

Economic:
14) The growth of computer programs led to an increase in the number of failures so spectacular that software automation had to be abandoned. This caused a drop in demand for powerful computers and stopped Moore’s Law before it reached its physical limits. The same growth in complexity and in the number of failures made the creation of AI more difficult.
15) AI is possible, but gives no significant advantage over a human in quality of results, speed, or cost of computation. For example, a simulation of a human costs a billion dollars and has no idea how to self-optimize. Meanwhile, people find ways to boost their own intellectual abilities by injecting stem-cell precursors of neurons, which further increases the competitive advantage of humans.
16) Nobody works on developing AI, because it is considered impossible; the belief becomes a self-fulfilling prophecy. AI is pursued only by cranks who lack both the intellect and the money for it. A project on the scale of the Manhattan Project could solve the problem of AI, but nobody undertakes one.
17) The technology of uploading consciousness into a computer develops far enough to serve all the practical purposes that had been associated with AI, so there is no need to create algorithmic AI. The uploading is done mechanically, through scanning, and still no one understands what happens inside the brain.

Political:
18) AI systems are prohibited or severely restricted for ethical reasons, so that people still feel themselves supreme. Perhaps specialized AI systems are permitted in military and aerospace applications.
19) AI is prohibited for safety reasons, as it represents too great a global risk.
20) AI emerges and establishes its authority over the Earth, but does not show itself, except that it does not allow others to develop their own AI projects.
21) AI does not appear in the form that was imagined, and therefore nobody calls it AI (e.g., the distributed intelligence of social networks).