
We can only see a short distance ahead, but we can see plenty there that needs to be done.
—Alan Turing

As a programmer, I look at events like the H+ Conference this weekend in a particular way. I see all of their problems as software: not just the code for AI and friendly AI, but also that for DNA manipulation. It seems that the biggest challenge for the futurist movement is to focus less on writing English and more on getting the programmers working together productively.

I start the AI chapter of my book with the following question: Imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia. Which will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today.

Today, the research community has not adopted free software and shared codebases sufficiently. For example, I believe there are more than enough PhDs today working on computer vision, but there are 200+ different codebases plus countless proprietary ones. Simply put, there is no computer vision codebase with critical mass.

Some think that these problems are so hard that it isn’t a matter of writing code; it is a matter of coming up with the breakthroughs on a chalkboard. But people can generally agree at a high level on how the software for solving many problems will work. There has been code for doing OCR and neural networks and much more kicking around for years. The biggest challenge right now is getting people together to hash out the details, which is a lot closer to Wikipedia than it first appears. Software advances in a steady, stepwise fashion, which is why we need free software licenses: to incorporate all the incremental advancements that each scientist is making. Advances must eventually be expressed in software (and data) so they can be executed by a computer. Even if you believe we need certain scientific breakthroughs, it should be clear that things like robust computer vision are complicated enough that you would want hundreds of people working together on the vision pipeline. So, while we are waiting for those breakthroughs, let’s get 100 people together!

There is an additional problem: C/C++ have not been retired. These languages make it hard for programmers to work together, even if they wanted to. There are all sorts of taxes on time, from learning the arcane rules of these ungainly languages, to the fact that libraries often use their own string classes, synchronization primitives, error handling schemes, etc. In many cases, it is easier to write a specialized and custom computer vision library in C/C++ than to integrate something like OpenCV, which does everything by itself down to the Matrix class. The pieces for building your own computer vision library (graphics, I/O, math, etc.) are in good shape, but the computer vision itself is not, which is why we haven’t moved beyond that stage! Another problem with C/C++ is that they do not have garbage collection, which is necessary but insufficient for reliable code.

A SciPy-based computational fluid dynamic (CFD) visualization of a combustion chamber.

I think scientific programmers should move to Python and build on SciPy. Python is a modern free language, and has quietly built up an extremely complete set of libraries for everything from gaming to scientific computing. Specifically, its SciPy library, with its various scikit extensions, is a solid baseline patiently waiting for more people to work on all sorts of futuristic problems. (It is true that Python and SciPy both have issues. One of Python’s biggest issues is that the default implementation is interpreted, but there are several workarounds being built [Cython, PyPy, Unladen Swallow, and others]. SciPy’s biggest challenge is how to be expansive without being duplicative. It is massively easier to merge English articles in Wikipedia that discuss the same topics than to do the equivalent in code. We need to share data in addition to code, but we need to share code first.)
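To make the “solid baseline” claim concrete, here is a minimal sketch of the kind of vision building block SciPy already provides: smoothing and gradient-based edge detection. The synthetic image and parameter choices are illustrative only, not part of any real pipeline.

```python
import numpy as np
from scipy import ndimage

# Synthetic 100x100 test image: a bright square on a dark background.
image = np.zeros((100, 100))
image[30:70, 30:70] = 1.0

# A few lines of SciPy give smoothing and gradient-based edge detection,
# the kind of low-level vision building block discussed above.
smoothed = ndimage.gaussian_filter(image, sigma=2)
edges_x = ndimage.sobel(smoothed, axis=0)
edges_y = ndimage.sobel(smoothed, axis=1)
edge_magnitude = np.hypot(edges_x, edges_y)

# The response is strongest at the square's boundary, near-zero inside it.
print(edge_magnitude.shape)
```

The same few calls scale unchanged from this toy image to a real photograph, which is the point: the building blocks exist, and what is missing is a shared codebase layered on top of them.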

Some think the singularity is a hardware problem, and won’t be solved for a number of years. I believe the benefits inherent in the singularity will happen as soon as our software becomes “smart” and we don’t need to wait for any further Moore’s law progress for that to happen. In fact, we could have built intelligent machines and cured cancer years ago. The problems right now are much more social than technical.

We can only see a short distance ahead, but we can see plenty there that needs to be done.

—Alan Turing

King Louis XVI’s entry in his personal diary for that fateful day of July 14, 1789 suggests that nothing important had happened. He did not know that the events of the day, the attack upon the Bastille, meant that the revolution was under way, and that the world as he knew it was essentially over. Fast forward to June 2010: a self-replicating biological organism (a transformed Mycoplasma mycoides bacterium) has been created in a laboratory by J. Craig Venter and his team. Yes, the revolution has begun. Indeed, the preliminaries have been going on for several years; it’s just that … um, well, have we been wide awake?

Ray Kurzweil’s singularity might be 25 years into the future, but sooner, a few years from now, we’ll have an interactive global network that some refer to as ‘global brain.’ Web3. I imagine no one knows exactly what will come out of all this, but I expect that we’ll find that the whole will be more than and different from the sum of the parts. Remember Complexity Theory. How about the ‘butterfly effect?’ Chaos Theory. And much more not explainable by theories presently known. I expect surprises, to say the least.

I am a retired psychiatrist, not a scientist. We each have a role to enact in this drama/comedy that we call life, and yes, our lives have meaning. Meaning! For me life is not a series of random events or events brought about by ‘them,’ but rather an unfolding drama/comedy with an infinite number of possible outcomes. We don’t know its origins or its drivers. Do we even know where our visions come from?

So, what is my vision and what do I want? How clearly do I visualize what I want? Am I passionate about what I want or simply lukewarm? How much am I prepared to risk in pursuit of what I want? Do I reach out for what I want directly or do I get what I want indirectly by trying to serve two masters, so to speak? If the former, I practice psychological responsibility; if the latter, I do not. An important distinction. The latter situation suggests an unresolved dilemma, common enough. Who among us can claim to be without one?

As we go through life there are times when we conceal from others, and to some extent from ourselves, exactly what it is that we want, hoping that what we want will come to pass without us clarifying openly what we stand for. One basic premise I like is that actions speak louder than words, and therefore by our actions in our personal lives, directly or indirectly, we bring to pass what we ultimately want.

Does that include what I fear? Certainly it might, if deep within me I am psychologically engineering an event that frightens me, if what I fear is what I secretly bring about. Any one among us might surreptitiously arrange drama so as to inspire or provoke others in ways that conceal our personal responsibility. All this is pertinent and practical, as will become obvious in the coming years.

We grew up in 20th century households or in families where we and other family members lived by 20th century worldviews, and so around the world 20th century thinking still prevails. Values have much to do with internalized learned relationships to limited and limiting aspects of the universe. In the midst of change we can transcend these. I wonder if by mid-century people will talk of the BP oil spill as the death throes of a dinosaur heralding the end of an age. I don’t know, but I imagine that we’re entering a phase of transition, a hiatus, in which we see our age fading away from us and a new age approaching. But the new has yet to consolidate. A dilemma: if we embrace the as yet ethereal new, we risk losing our roots and all that we value; if we cling to the old, we risk seeing the ship leave without us.

We are crew, and not necessarily volunteers, on a vessel bound for the Great Unknown. Like all such voyages taken historically, this one is not without its perils. When established national boundaries become more porous, when old-fashioned foreign policy fails, when the ‘old guard’ feels threatened beyond what it will tolerate, what then? Will we regress into authoritarianism? Will we demand a neo-fascist state so as to feel secure? Or will we climb aboard the new? Yes, we can climb aboard even if we’re afraid. To be sure we’ll grumble, and some will talk of mutiny. A sense of loss is to be expected. We all feel a sense of loss when radical change happens in our personal lives, even when the change is for the better; I am aware of this in my own life as I clarify meaning in life. There are risks either way. Such is life.

But change is also adventure: I am old enough to remember the days of the ocean liners and how our eyes lit up and our hearts rose up joyfully as we stood on deck departing into the vision, waving to those left behind. Indeed we do this multiple times in our lives as we move from infancy to old age and finally towards death. And like good psychotherapy, the coming change will be both confronting and rewarding. Future generations are of us and we are of them; we cannot be separated.

What a time to be alive!

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, an increasing concern has arisen with the topic of “friendly AI,” coupled with the idea we should do something about this now, not after a potentially deadly situation is starting to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI, but also for ourselves, we should be seeking to develop and promote “ruling ideas” (or source models) that will foster an ecologically-respectful AI culture, including respect for humanity and other life forms, and actively sell it to them as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave with contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being, it could easily mask a deeper unkindness.

2.2. Just AI

Certainly to have AIs act justly in accordance with law is highly desirable, and it constitutes the central thesis of my principal prior work in this field [6]. It also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?
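The “game theory” calculus mentioned here can be made concrete with a toy expected-value model. The payoff numbers and probabilities below are entirely hypothetical, chosen only to show the structure of the calculation, not to model any actual agent.

```python
# Toy expected-value model of faithfulness vs. betrayal.
# All payoffs and probabilities are hypothetical illustrations.
def expected_payoff_of_betrayal(gain, penalty, p_caught):
    """Immediate gain from betrayal, minus the penalty discounted
    by the probability of being caught."""
    return gain - penalty * p_caught

# When detection is unlikely, betrayal "pays" under this model...
low_risk = expected_payoff_of_betrayal(gain=10, penalty=50, p_caught=0.1)
# ...and when detection is near-certain, it does not.
high_risk = expected_payoff_of_betrayal(gain=10, penalty=50, p_caught=0.9)

print(low_risk, high_risk)
```

An AI weighing reputational penalties over many repeated interactions would extend this single-shot sketch with repeated-game terms, which is exactly where consistent, non-hypocritical conduct by the other players changes the calculus.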

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Otherwise it will create a “non-complementary” situation, in which what is true for one, who experiences friendliness, may not be true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical, by delimiting its scope and depth. To how wide a circle does this kindness obligation extend, and how far must one go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001, Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here is my section entitled “Software and the Singularity”. I hope you find this food for thought and I appreciate any feedback.


Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045. Therefore, according to its proponents, the world will be amazing then. [3] The flaw with such date estimates, other than the fact that they are always prone to extreme error, is that continuous learning is not yet a part of the foundation. Any AI code lives in the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the singularity will happen as soon as our software becomes “smart” and we don’t need to wait for any further Moore’s law progress for that to happen. Computers today can do billions of operations per second, like add 123,456,789 and 987,654,321. If you could do that calculation in your head in one second, it would take you 30 years to do the billion that your computer can do in that second.
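The “30 years” figure is easy to check; this sketch just does the arithmetic of one addition per second against a billion additions.

```python
# A billion operations at one per second: how long is that?
seconds_per_year = 60 * 60 * 24 * 365
years = 1_000_000_000 / seconds_per_year

print(round(years, 1))  # 31.7 -- roughly the "30 years" above
```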

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios, the size of the input is the primary driving factor in the processing power required to do the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image recognition pipeline, and the processes that take place in our brain, dramatically reduces the amount of data from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that sets the requirements, values that are trivial to change. No one has shown robust vision recognition software running at any speed, on any sized image!
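The data-reduction argument can be sketched numerically. The stage names and byte counts below are invented for illustration; only the first figure (3 bytes per pixel for a one-million-pixel image) comes from the text.

```python
# Hypothetical recognition pipeline: how much data each stage carries.
pipeline = [
    ("raw RGB image (1 megapixel)", 3_000_000),  # 3 bytes per pixel
    ("edge/feature map", 200_000),
    ("detected shapes", 5_000),
    ("candidate objects", 200),
    ("final concept: 'my house'", 20),
]

for stage, nbytes in pipeline:
    print(f"{stage:30s} {nbytes:>10,} bytes")

# Each stage shrinks the data, so the first stage dominates the cost,
# and that stage's size is set entirely by image resolution.
reduction = pipeline[0][1] / pipeline[-1][1]
print(f"overall reduction: {reduction:,.0f}x")
```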

While a brain is different from a computer in that it does work in parallel, such parallelization only makes it happen faster; it does not change the result. Anything accomplished in our parallel brain could also be accomplished on computers of today, which can do only one thing at a time, but at the rate of billions per second. A 1-gigahertz processor can do 1,000 different operations on a million pieces of data in one second. With such speed, you don’t even need multiple processors! Even so, more parallelism is coming. [4]
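The claim that parallelism changes the speed but not the result can be demonstrated directly: a vectorized (“all at once”) computation and a one-element-at-a-time loop produce the identical answer.

```python
import numpy as np

n = 1_000_000
data = np.arange(n, dtype=np.int64)

# "Parallel" style: one vectorized operation over every element at once.
parallel_result = int((data * 2 + 1).sum())

# Serial style: one element at a time, as a single core would do it.
serial_result = 0
for x in range(n):
    serial_result += x * 2 + 1

# Same answer either way; only the elapsed time differs.
print(parallel_result == serial_result)  # True
```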

[3] His prediction is that the number of computers, times their computational capacity, will surpass the number of humans, times their computational capacity, in 2045. This calculation seems flawed for several reasons:

  1. We will be swimming in computational capacity long before then. An intelligent agent twice as fast as the previous one is not necessarily more useful.
  2. Many of the neurons of the brain are not spent on reason, and so shouldn’t be in the calculations.
  3. Billions of humans are merely subsisting, and are not plugged into the global grid, and so shouldn’t be measured.
  4. There is no continuous learning built into today’s software.

Each of these would tend to push the Singularity closer and support the argument that the benefits of the singularity are not waiting on hardware. Humans make computers smarter, and computers make humans smarter, so this feedback loop is another reason that 2045 is a meaningless moment in time.

[4] Most computers today contain a dual-core CPU, and processor makers promise that 10 and more cores are coming. Intel’s processors also have parallel processing capabilities, known as MMX and SSE, that are easily adapted to the work of the early stages of any analysis pipeline. Intel would add even more of this parallel processing support if applications put it to better use. Furthermore, graphics cards exist primarily to do work in parallel, and this hardware could be adapted to AI if it is not usable already.

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

The summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference is being held on the East Coast, at Harvard University’s prestigious Science Hall on June 12–13, following the inaugural conference in Los Angeles in December 2009. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and the Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is “The Rise Of The Citizen Scientist”, as illustrated by Alex Lightman, Executive Director of Humanity+, in his talk:

“Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and ‘peak everything’. Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories.”

Humanity+ Summit @ Harvard is an unmissable event for everyone who is interested in the evolution of the rapidly changing human condition, and the impact of accelerating technological change on the daily lives of individuals, and on our society as a whole. Tickets start at only $150, with an additional 50% discount for students registering with the coupon STUDENTDISCOUNT (valid student ID required at the time of admission).

With over 40 speakers and 50 sessions in two jam-packed days, attendees and speakers will have many opportunities to interact and discuss, complementing the conference with the necessary networking component.

Other speakers already listed on the H+ Summit program page include:

  • David Orban, Chairman of Humanity+: “Intelligence Augmentation, Decision Power, And The Emerging Data Sphere”
  • Heather Knight, CTO of Humanity+: “Why Robots Need to Spend More Time in the Limelight”
  • Andrew Hessel, Co-Chair at Singularity University: “Altered Carbon: The Emerging Biological Diamond Age”
  • M. A. Greenstein, Art Center College of Design: “Sparking our Neural Humanity with Neurotech!”
  • Michael Smolens, CEO of dotSUB: “Removing language as a barrier to cross cultural communication”

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate, and provoke, advancing our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard — The Rise Of The Citizen Scientist
June 12–13, Harvard University
Cambridge, MA

You can register at http://www.eventbrite.com/event/648806598/friendsofhplus/4141206940.

8th European conference on Computing And Philosophy — ECAP 2010
Technische Universität München
4–6 October 2010

Submission deadline of extended abstracts: 7 May 2010
Submission form

Theme

Historical analysis of a broad range of paradigm shifts in science, biology, history, technology, and in particular in computing technology, suggests an accelerating rate of evolution, however measured. John von Neumann projected that the consequence of this trend may be an “essential singularity in the history of the race beyond which human affairs as we know them could not continue”. This notion of singularity coincides in time and nature with Alan Turing’s (1950) and Stephen Hawking’s (1998) expectation of machines that exhibit intelligence on a par with the average human no later than 2050. Irving John Good (1965) and Vernor Vinge (1993) expect the singularity to take the form of an ‘intelligence explosion’, a process in which intelligent machines design ever more intelligent machines. Transhumanists suggest a parallel or alternative, explosive process of improvements in human intelligence. And Alvin Toffler’s Third Wave (1980) forecasts “a collision point in human destiny” the scale of which, in the course of history, is on a par only with the agricultural revolution and the industrial revolution.

We invite submissions describing systematic attempts at understanding the likelihood and nature of these projections. In particular, we welcome papers critically analyzing the following issues from philosophical, computational, mathematical, scientific, and ethical standpoints:

  • Claims and evidence of acceleration
  • Technological predictions (critical analysis of past and future)
  • The nature of an intelligence explosion and its possible outcomes
  • The nature of the Technological Singularity and its outcome
  • Safe and unsafe artificial general intelligence and preventative measures
  • Technological forecasts of computing phenomena and their projected impact
  • Beyond the ‘event horizon’ of the Technological Singularity
  • The prospects of transhuman breakthroughs and likely timeframes

Amnon H. Eden, School of Computer Science & Electronic Engineering, University of Essex, UK and Center For Inquiry, Amherst NY

A few months ago, my friend Benjamin Jakobus and I created an online “risk intelligence” test at http://www.projectionpoint.com/. It consists of fifty statements about science, history, geography, and so on, and your task is to say how likely you think it is that each of these statements is true. We calculate your risk intelligence quotient (RQ) on the basis of your estimates. So far, over 30,000 people have taken our test, and we’re currently writing up the results for some peer-reviewed journals.

Now we want to take things a step further, and see whether our measure correlates with the ability to make accurate estimates of future events. To this end we’ve created a “prediction game” at http://www.projectionpoint.com/prediction_game.php. The basic idea is the same; we provide you with a bunch of statements, and your task is to say how likely you think it is that each one is true. The difference is that these statements refer not to known facts, but to future events. Unlike the first test, nobody knows whether these statements are true or false yet. For most of them, we won’t know until the end of the year 2010.

For example, how likely do you think it is that this year will be the hottest on record? If you think this is very unlikely you might select the 10% category. If you think it is quite likely, but not very likely, you might put the chances at 60% or 70%. Selecting the 50% category would mean that you had no idea how likely it is.
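The projectionpoint.com pages do not spell out the formula behind the RQ score, but once the outcomes of such predictions are known, a standard way to grade probability estimates like these is the Brier score (lower is better). A minimal sketch in Python, using made-up forecasts for illustration:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities (0..1) and what
    actually happened (1 if the statement turned out true, else 0).
    0.0 is a perfect score; always answering 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical example: three predictions made at 90%, 60%, and 10%;
# the first two came true, the third did not.
print(f"{brier_score([0.9, 0.6, 0.1], [1, 1, 0]):.2f}")  # prints 0.06
```

Note how the 50% answer really does encode "no idea": it earns the same middling score whether the statement turns out true or false.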

This is ongoing research, so please feel free to comment, criticise or make suggestions.

I recently watched James Cameron’s Avatar in 3D. It was an enjoyable experience in some ways, but overall I left dismayed on a number of levels.

It was enjoyable to watch the lush three-dimensional animation and motion-capture-controlled graphics. I’m not sure that 3D will take over – as many now expect – until we get rid of the glasses (there are emerging technologies to do that, although the 3D effect is not yet quite as good), but it was visually pleasing.

While I’m being positive, I was pleased to see Cameron’s positive view of science, in that the scientists are the “good” guys (or at least one good gal) with noble intentions of learning the wisdom of the Na’vi natives and negotiating a diplomatic solution.

The Na’vi were not completely technology-free. They basically used the type of technology that Native Americans used hundreds of years ago – same clothing, domesticated animals, natural medicine, and bows and arrows.

They were in fact exactly like Native Americans. How likely is that? Life on this distant moon in another star system has evolved creatures that look essentially the same as earthly creatures, with very minor differences (dogs, horses, birds, rhinoceros-like animals, and so on), not to mention humanoids that are virtually the same as humans here on Earth. That’s quite a coincidence.

Cameron’s conception of technology a hundred years from now was incredibly unimaginative, even by Hollywood standards. For example, the munitions that were supposed to blow up the tree of life looked like they were used in World War II (maybe even World War I). Most of the technology looked primitive, even by today’s standards. The wearable exoskeleton robotic devices were supposed to be futuristic, but these already exist, and are beginning to be deployed. The one advanced technology was the avatar technology itself. But in that sense, Avatar is like the world of the movie AI, where they had human-level cyborgs, but nothing else had changed: AI featured 1980s cars and coffee makers. As for Avatar, are people still going to use computer screens in a hundred years? Are they going to drive vehicles?

I thought the story and script were unimaginative, one-dimensional, and derivative. The basic theme was “evil corporation rapes noble natives.” And while that is a valid theme, it was done without the least bit of subtlety, complexity, or human ambiguity. The basic story was taken right from Dances with Wolves. And how many (thousands of) times have we seen a final battle scene that comes down to a fight between the hero and the anti-hero that goes through various incredible stages — fighting on a flying airplane, in the trees, on the ground, etc.? And (spoiler alert) how predictable was it that the heroine would pull herself free at the last second and save the day?

None of the creatures were especially creative. The flying battles were like Harry Potter’s Quidditch, and the flying birds were derivative of Potter creatures, including mastering flying on the back of big bird creatures. There was some concept of networked intelligence but it was not especially coherent. The philosophy was the basic Hollywood religion about the noble cycle of life.

The movie was fundamentally anti-technology. Yes, it is true, as I pointed out above, that the natives use tools, but these are not the tools we associate with modern technology. And it is true that the Sigourney Weaver character and her band of scientists intend to help the Na’vi with their human technology (much like international aid workers might do today in developing nations), but we never actually see that happen. I got the sense that Cameron was loath to show modern technology doing anything useful. So even when Weaver’s scientist becomes ill, the Na’vi attempt to heal her only with the magical life force of the tree of life.

In Cameron’s world, Nature is always wise and noble, which indeed it can be, but he fails to show its brutal side. The only thing that was brutal, crude, and immoral in the movie was the “advanced” technology. Of course, one could say that it was the user of the technology that was immoral (the evil corporation), but that is the only role for technology in the world of Avatar.

In addition to being evil, the technology of the Avatar world of over 100 years from now is also weaker than nature, so the rhinoceros-like creatures are able to defeat the tanks circa 2100. It was perhaps a satisfying spectacle to watch, but how realistic is that? The movie shows the natural creatures communicating with each other with some kind of inter-species messaging and also shows the tree of life able to remember voices. But it is actually real-world technology that can do those things right now. In the Luddite world of this movie, the natural world should and does conquer the brutish world of technology.

In my view, there is indeed a crudeness to first-industrial-revolution technology. The technology that will emerge in the decades ahead will be altogether different. It will enhance the natural world while it transcends its limitations. Indeed, it is only through the powers of exponentially growing info, bio, and nano technologies that we will be able to overcome the problems created by first-industrial-revolution technologies such as fossil fuels. This idea of technology transcending natural limitations was entirely lost in Cameron’s vision. Technology was just something crude and immoral, something to be overcome, something that Nature does succeed in overcoming.

It was visually pleasing; although even here I thought it could have been better. Some of the movement of the blue natives was not quite right and looked like the unrealistic movement one sees of characters in video games, with jumps that show poor modeling of gravity.

The ending (spoiler alert) was a complete throwaway. The Na’vi defeat the immoral machines and their masters in a big battle, but if this mineral the evil corporation was mining is indeed worth a fortune per ounce, they would presumably come back with a more capable commander. Yet we hear Jake’s voice at the end saying that the mineral is no longer needed. If that’s true, then what was the point of the entire battle?

The Na’vi are presented as the ideal society, but consider how they treat their women. The men get to “pick” their women, and Jake is invited to take his pick once he earns his place in the society. Jake makes the heroine his wife, knowing full well that his life as a Na’vi could be cut off at any moment. And what kind of child would they have? Well, perhaps these complications are too subtle for the simplistic Avatar plot.

Because of the election cycle, the United States Congress and Presidency have a tendency to be short-sighted. It is therefore a welcome relief when an organization such as the U.S. National Intelligence Council gathers many smart people from around the world to do some serious thinking more than a decade into the future. But while the authors of the NIC report Global Trends 2025: A Transformed World[1] understood the political situations of countries around the world extremely well, their report lacked two things:

1. Sufficient knowledge about technology (especially productive nanosystems) and their second order effects.

2. A clear and specific understanding of Islam and the fundamental cause of its problems. More generally, an understanding of the relationship between its theology, technological progress, and cultural success.

These two gaps need to be filled, and this white paper attempts to do so.

Technology
Christine Peterson, the co-founder and vice-president of the Foresight Nanotech Institute, has said “If you’re looking ahead long-term, and what you see looks like science fiction, it might be wrong. But if it doesn’t look like science fiction, it’s definitely wrong.” None of Global Trends 2025’s predictions look like science fiction, though perhaps 15 years from now is not long-term (on the other hand, 15 years is not short-term either).

The authors of Global Trends 2025 are wise in the same way that Socrates was wise: they admit to possibly not knowing enough about technology: “Many stress the role of technology in bringing about radical change and there is no question it has been a major driver. We—as others—have oftentimes underestimated its impact” (p. 5).

Predicting the development and total impact of technology more than a few years into the future is exceedingly difficult. For example, of all the science fiction writers who correctly predicted a landing on the Moon, only one obscure writer predicted that it would be televised world-wide. Nobody would have believed, much less predicted, that we wouldn’t return for more than 40 years (and counting).

Other than orbital mechanics and demographics, there has been nothing more certain in the past two centuries than technological progress.[2] So it is perplexing that the report claims (correctly) that “[t]he pace of technology will be key [in providing solutions to energy, food, and water constraints]” (p. iv), but then does not adequately examine the solutions pouring out of labs all over the world. To the authors’ credit, they foresaw that nanofibers and nanoparticles will increase the supply of clean water. In addition, they foresaw that nuclear bombs and bioweapons will become easier to manufacture. However, the static nanostructures they briefly discuss are only the first of four phases of nanotechnology maturation—they will be followed by active nanodevices, then nanomachines, and finally productive nanosystems. Ignoring this maturation of nanotechnology will lead to significant underestimates of future capabilities.

If the pace of technological development is key, then on what factors does it depend?

The value of history is that it helps us predict the future. We should therefore consider the following questions while looking backwards as far as we wish to look forward:

  • Where were thumb drives 15 years ago? My twenty-dollar 8GB thumb drive would have cost $20,000 and certainly wouldn’t have fit on my keychain. How powerful will my cell phone be 15 years from now? What are the secondary impacts of throwaway supercomputers?
  • In 1995 the Internet had six million hosts. There are now over 567 million hosts and 1.4 billion users. At this rate of growth, in 15 years there will be a trillion users, most of them automated machines, and many of them mobile.
  • In 1995 there were over 10 million cell phone users in the USA; now there are around 250 million. Globally, the explosion was significantly larger, with over 2.4 billion current cell phone users. What will the effect be of a continuation of smart, mobile interconnectedness?
  • The World Wide Web reached the general public in 1993 with the release of the Mosaic browser. Where was Google in 1995? Three years in the future. What else can we have besides the world’s information at our fingertips?
The problem with using recent history to guide predictions about the future is that the pace of technological development is not linear but exponential—and exponential growth is often surprising: recall the pedagogical examples of the doubling grains of rice (from India[3] and China[4]) or lily pads on the pond (from France[5]). In exponential growth, the early portion of the curve is fairly flat, while the latter portion is very steep.
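Those pedagogical examples are easy to check with a few lines of Python; the flat-then-steep shape falls right out of the arithmetic:

```python
# The chessboard/rice parable made concrete: one grain on the first
# square, doubling on every square after it, for 64 squares.
grains_on_last_square = 2 ** 63       # the 64th square alone
total_grains = 2 ** 64 - 1            # every square combined

# Linear intuition ("one more grain per square") gives a harmless total:
linear_total = sum(range(1, 65))      # 2080 grains

print(grains_on_last_square)          # 9223372036854775808
print(total_grains)                   # 18446744073709551615
```

Two thousand grains versus eighteen quintillion: the gap between the linear guess and the exponential reality is the whole point of the parable.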

Therefore, to predict technological development accurately, we should probably look back more than 15 years; perhaps we should be looking back 150 years. Exactly how far back we should look is difficult to determine—some metrics have not changed at all despite technological advances. For example, the speed limit is still 65 MPH, and there are no flying cars commercially available. On the other hand, cross-country airline flights are still the same price they were thirty years ago, despite inflation. Moore’s Law of electronics has had a doubling time of about 18 months, but some technologies have grown much more slowly. Others, such as molecular biology, have progressed significantly faster.

More important would be qualitative changes that are difficult to quantify. For example, the audio communication of telephones has a measurable bit rate greater than that of the telegraph system, but the increased level of understanding communicated by the emotion in people’s voices is much greater than can be quantified by bit rate. Similarly, search engines have qualitatively increased the value of the Internet’s TCP/IP data communication capabilities. Some innovators have pushed Web 2.0 in different directions, but it’s not clear what the qualitative benefits might be, other than better-defined relationships between pieces of data. What happens with Web 3.0? Cloud computing? How many generations of innovation will it take to get to wisdom, or distributed sentience? It may be interesting to speculate about these matters, but since it often involves new science (or even new metaphysics), it is not possible to predict events with any accuracy.

Inventor and author Ray Kurzweil has made a living out of correctly timing his inventions. Among other things, he correctly predicted the growth of the Internet when it was still in its infancy. His method is simple: he plots data on a logarithmic graph, and if he gets a straight line, then he has discovered something that grows exponentially. His critics claim that his data is cherry-picked, but there are too many examples, across a wide variety of technologies, for that criticism to hold. The important point is why Kurzweil’s “law of accelerating returns” works, and what its limitations are: it applies to technologies for which information is an essential component. This phenomenon, made possible because information does not follow many of the rules of physics (it has no mass, and negligible energy and copying costs), partially explains Moore’s Law in electronics, and also the exponential progress in molecular biology that began once we understood enough of its informational basis.
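Kurzweil’s plotting method can be sketched in a few lines: fit a straight line to the logarithm of the data, and the slope gives the doubling time. The data points below are made up purely for illustration, not drawn from his books:

```python
import math

# Illustrative, made-up data: (year, performance) for some capability
# that roughly doubles every two years.
data = [(2000, 1.0), (2002, 2.1), (2004, 3.9), (2006, 8.2), (2008, 15.8)]

# Fit log(performance) = a + b * year by least squares. A good straight-line
# fit on this log scale is exactly the "straight line on a logarithmic
# graph"; the slope b corresponds to a doubling time of ln(2)/b.
xs = [year for year, _ in data]
ys = [math.log(value) for _, value in data]
n = len(data)
xbar = sum(xs) / n
ybar = sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
doubling_time = math.log(2) / b

print(f"doubling time: {doubling_time:.1f} years")  # about 2.0 years
```

The same fit quantifies the limitation: if the points do not lie near a straight line on the log scale, the growth is not exponential and extrapolating it as such will mislead.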

Technology Breakthroughs
The “Technology Breakthroughs by 2025” foldout matrix in the NIC report (pp. 47–49) is a great start on addressing the impact of technology, but barely a start. It is woefully conservative—some of the items listed in the report have already been demonstrated in labs. For example, “Energy Storage” (in terms of batteries) has already been improved ten-fold.[6] (Caveat: the authors correctly point out that there is a delay between invention and wide adoption—usually about a decade for non-information-based products—but 2019 is still considerably before 2025.) Hardly any other nanotech-enhanced products were examined, and they should have been.[7]

The ten specific technologies represented, and their drivers, barriers, and impact were well considered, but there were no clear criteria for picking these ten technologies. The report should have made clear that the most important technologies are those that can destroy or reboot the world’s economy or ecosystem. Almost as important are technologies that have profound effects on government, education, transportation, and family life. Past examples of such technologies include the nuclear bomb, the automobile, the telephone, the birth control pill, the personal computer, the internet, and search engines.

Though there were no clear criteria for choosing critical technologies, the report correctly included the world-changing technologies of ubiquitous computing, clean water, energy storage, biogerontechnology (life extension/age amelioration), and service robotics.

The inclusion of clean coal and biofuels is understandable given a linear projection of current trends. However, trends are not always linear—especially in information-dependent fields. Coal-based energy generation depends on the well-understood Carnot cycle, and is currently close to the theoretical maximum. Therefore, new knowledge about coal or the Carnot cycle will not help us in any significant way—especially since no new coal is being made. In contrast, photovoltaic solar power is currently expensive, inefficient, and underused. This is partially because of our lack of detailed understanding of the physics of photon capture and electron transfer, and partially because of our current inability to control the nanostructures that can perform those operations. As we develop more powerful scientific tools at the nanoscale, and as our nanomanufacturing capabilities grow, the price of solar power will drop significantly. This is why global solar power has grown exponentially (with a two-year doubling time) for the past decade or so. This also means that in the next five years, we will likely reach a point at which it will be obvious that no other energy source can match photovoltaic solar power.

It is puzzling why exoskeleton human strength augmentation made the report’s list. First, we have already commercialized compact fork-lifts and powered wheelchairs, so further improvements (in the form of exoskeletons) will necessarily be incremental and therefore will have little impact. Second, an exoskeleton is simply a sophisticated fork-lift/wheelchair and not true human strength augmentation, so it will not elicit the revulsion that might be generated by injecting extra IGF-1 genes or implanting electro-bionic actuators.

While being smarter is certainly a desirable condition, many forms of human cognitive augmentation elicit fear and loathing in many people (as the report recognizes). In terms of game-changing potential, it certainly deserves to be included as a disruptive technology. But this is a prediction of new science, not new engineering, and as such it should be labeled as “barely plausible.” If human cognitive augmentation is included, so should other very high-impact but highly unlikely scenarios, such as “gray goo” (i.e. out-of-control self-replicating nanobots), alien invasion, and human-directed meteor strikes.

What should have made the list are many forms of productive nanosystems, especially DNA Origami,[8] Bis-proteins,[9] Patterned Atomic Layer Epitaxy,[10] and Diamondoid Mechanosynthesis.[11],[12],[13] Other technologies that should have been on the list include replicating 3D printers (such as RepRap[14]), the weather machine,[15] Solar Power Satellites (which the DoD is currently investigating[16]), Utility Fog,[17] and the Space Pier.[18]

Technologically Sophisticated Terrorism
The report correctly notes that the diffusion of technologies and scientific knowledge will increase the chance that terrorist or other malevolent groups might acquire and employ biological agents or nuclear devices (p. ix). But this danger is seriously underestimated, given the exponential growth of technology. Also underestimated is the future ability to clean up hazardous wastes of all types (including actinides, most notably uranium and plutonium) using nanomembranes and highly selective adsorbents. This is significant, especially in the case of Self-Assembled Monolayers on Mesoporous Supports (SAMMS) developed at Pacific Northwest National Labs,[19] because anything that can remove parts-per-billion concentrations of plutonium and uranium from water can also concentrate them. As the price drops for this filtration technology, and for nuclear enrichment tools,[20],[21] eventually small groups and even individuals will be able to collect enough fissile material for nuclear weapons.

The partial good news is that while these concentrating technologies are being developed, medical technology will also be progressing, making severe radiation exposure significantly more survivable. Unfortunately, the end result is an increasing likelihood that nuclear weapons will be used as “ordinary” tactical weapons.

The Distribution of Technology
While it is true that in the energy sector it has taken “an average of 25 years for a new production technology to become widespread,” (p. viii) there are a few things to keep in mind:

Informational technologies spread much faster than non-informational technologies. The explosion of the internet, web browsers, and the companies that depend on them occurred in just a few years, if not months. Even now, for example, updates for the Mozilla Firefox browser spread worldwide in days. This speed of distribution will eventually extend to physical goods, because productive nanosystems will make atoms as easy to manipulate as bits.

Reducing monopolies and their attendant inefficiencies is necessary. Even sufficiently powerful technologies have trouble emerging in the face of monopolies. The report mentions “selling energy back to the grid,” but understates the value that such a distributed energy network would have in increasing our nation’s security. The best part about building such a robust energy system is that it does not require large amounts of government investment—only an innovation-friendly policy that mandates that utilities buy energy at fair rates.

Mandating Gasoline/Ethanol/Methanol-flexibility (GEM) and/or electric hybrid flexibility in automobiles could break the oil cartel.[22] This simple governmental mandate would have huge political implications with little cost impact on consumers (a GEM requirement would only raise the cost of cars by $100-$300).

Miscellaneous Technology Observations
The 2025 report states that “Unprecedented economic growth, coupled with 1.5 billion more people, will put pressure on resources—particularly energy, food, and water—raising the specter of scarcities emerging as demand outstrips supply” (p. iv).

This claim is not necessarily true. The carrying capacity of a given biome depends on the available technology—and increased wealth can pay for advanced technologies. However, war, injustice, and ignorance drastically raise the effort required to avoid scarcities.

The report listed climate change as a possible key factor (p. v) and stated that “Climate change is expected to exacerbate resource scarcities” (p. viii). But even the most pessimistic predictions don’t expect much to happen by 2025. And there is evidence that by 2025, we will almost certainly have the power to stop it with trivial effort.[23], [24]

The Foresight Nanotech Institute and Lux Research have also identified clean water as one of the areas in which technology will have a major impact. There are a number of different nanomembranes that are very promising, and the Global Trends 2025 report recognizes them as probable successes.

The Global Trends 2025 report identified Ubiquitous Computing, RFID (Radio Frequency Identification), and the “Internet of Things” as improving efficiency in supply chains, but more importantly, as possibly integrating closed societies into the global community (p. 47). SCADA (Supervisory Control And Data Acquisition), which is used to run everything from water treatment plants to nuclear power plants, is a harbinger of the “Internet of Things”, but the news is not always good. An “Internet of Things” will simply give more opportunities for hackers and terrorists to do harm. (SCADA manuals have been found in Al-Qaeda safe houses.)

Wealth depends on Technology
The 2025 report predicts that “the unprecedented transfer of wealth roughly from West to East now under way will continue for the foreseeable future… First, increases in oil and commodity prices have generated windfall profits for the Gulf states and Russia. Second, lower costs combined with government policies have shifted the locus of manufacturing and some service industries to Asia.”(p. vi)

But why would that transfer continue? If the current exponential growth of solar power continues, then within five years it will be obvious that oil is dead. Some of the more astute Arab leaders understand this; one Saudi prince said, “The Stone Age didn’t end because we ran out of stones, and the oil age won’t end because we run out of oil.”

China and India have gained the lion’s share of the world’s manufacturing, but is there any reason to believe that this will continue? Actually, there is one reason it might: most of the graduate students at most American universities are foreign-born, and manufacturing underlies a vital part of the real wealth of a society, which in turn depends on its access to science and engineering. On the other hand, many of those foreign graduate students remain in the United States to become U.S. citizens. Even those who return to their home countries maintain personal relationships with American citizens, and generally spread positive stories about their experiences in the U.S., leading to more graduate students coming to the United States to settle.

The prediction that the United States will become a less dominant power is a sobering one for Americans. However, of the reasons listed in the report (advances by other countries in Science and Technology (S&T), expanded adoption of irregular warfare tactics, proliferation of long-range precision weapons, and growing use of cyber warfare attacks) the only significant item is S&T (Science and Technology). This is not only because S&T is the foundation for the other reasons listed, but also because it can often provide a basis for defending against new threats.

S&T is not only the foundation of military might; more importantly, it is a foundation of economic might. However, our economy rests not only on S&T, but also on economic policy. And unfortunately, everyone’s crystal ball is cloudy in this area. Historically, our regulated capitalism seems to be the basis for much of our wealth, and has been partially responsible for funding S&T. This is important because while human intelligence and ingenuity are scattered relatively evenly among the human race,[25] successful inventions are not. This is because it generally requires money to turn money into knowledge—that is, research. After the research is done, the process of innovation—turning knowledge into money—begins, and it is very dependent on the surrounding economic and political environment. At any rate, the relationship between technology and economics is not clear, and certainly needs closer examination.

Wealth depends on Technology depends on Theology
The 2025 report contained some unspecified assumptions regarding economics, without defining what real wealth is or what it depends on. At first glance, wealth is stored human labor—this was Marx’s assumption, and it is partially correct. However, one skilled person can do significantly more with good tools; hence the conclusion that tools are the lever of riches (and hence Mokyr’s book of the same name[26]).

But tools are not enough. As Zhao (Peter) Xiao, a former Communist Party member and adviser to the Chinese Central Committee, put it:

“From the ancient time till now everybody wants to make more money. But from history we see only Christians have a continuous nonstop creative spirit and the spirit for innovation… The strong U.S. economy is just on the surface. The backbone is the moral foundation.” [27]

He goes on to explain that we are all made in the image and likeness of God, and are therefore His children. This means that:

  • The Rule of Law is not just something to cleverly avoid, but the means to happiness.
  • There is a constraint on unbridled and unjust capitalism.
  • People become rich by working hard to create real wealth, not by gaming the system—which creates waste and inefficiency.[28]

Xiao does not believe in the “prosperity gospel” (i.e. send a televangelist $20 and God will make you rich). He understands that an economic system works more efficiently without false signals and other corruption—i.e. a nation will only have a prosperous economy if it has enough moral, law-abiding citizens. In addition, he may be hinting that the idea of Imago Dei (“Image of God”) explains how human intelligence drives Moore’s Law in the first place—if God is infinite, then it makes sense that His images will be able to endlessly do more with less.

Islam
The 2025 report mentions Islam fairly often but does not analyze it in depth. Oddly enough, the United States has been at war with Islamic nations longer than with any others, starting with the Barbary pirates. So it behooves us to understand Islam and to see whether any fundamental issues might be the root cause of some of these wars. Many Americans have denigrated Islam as a barbaric 7th-century relic, not realizing that the Judeo-Christian roots of this nation go back even farther (and are just as barbaric at times). Peter Kreeft has done an excellent job of examining the strengths of Islam, exhorting readers to learn from the followers of Mohammed.[29] But the purpose of this white paper is to investigate how Islamic beliefs hurt Muslims—and us.

There is no question that most Islamic nations have serious economic problems. Islamabad columnist Farrukh Saleem writes:

Muslims are 22 percent of the world population and produce less than five percent of global GDP. Even more worrying is that the Muslim countries’ GDP as a percent of the global GDP is going down over time. The Arabs, it seems, are particularly worse off. According to the United Nations’ Arab Development Report: ‘Half of Arab women cannot read; One in five Arabs live on less than $2 per day; Only 1 percent of the Arab population has a personal computer, and only half of 1 percent use the Internet; Fifteen percent of the Arab workforce is unemployed, and this number could double by 2010; The average growth rate of the per capita income during the preceding 20 years in the Arab world was only one-half of 1 percent per annum, worse than anywhere but sub-Saharan Africa.’[30]

There are two possible reasons for the high rate of poverty in the Muslim world:

Diagnosis 1: Muslims are poor, illiterate, and weak because they have “abandoned the divine heritage of Islam”. Prescription: They must return to their real or imagined past, as defined by the Qur’an.

Diagnosis 2: Muslims are poor, illiterate, and weak because they have refused to change with time. Prescription: They must modernize technologically, governmentally, and culturally (i.e. start ignoring the Qur’an).[31]

Different Muslims will make different diagnoses, resulting in a continuation of the simultaneous rise of both secularized and fundamentalist Islam. This is the unexplained reason behind the 2025 report’s prediction that “the radical Salafi trend of Islam is likely to gain traction” (p. ix). While it is true that economics is an important causal factor, we must remember that economics is filtered through human psychology, which is filtered through human assumptions about reality (i.e. metaphysics and religion). The important question about Islam and nanotechnology is this: How will exponential increases in technology affect the answers of individual Muslims to the question raised above? One relatively easy prediction is that they will drive Muslims even more forcefully into both secularism and fundamentalism—with fewer adherents between them.

We must also address the underlying question: What is it about Islamic beliefs that causes poverty? Global Trends 2025 points out that there is a significant correlation between the poverty of a nation and its female literacy rate (p. 16). But the connection goes deeper than that.

A few hundred years ago, the Islamic world was significantly ahead of Europe, both technologically and culturally, but then Islamic leaders declared their greatest philosophers heretics, especially Averroes (Ibn Rushd), who tried to reconcile faith and reason. Christianity struggled with the same tension between faith and reason, but ended up declaring its greatest philosophers saints, most notably Thomas Aquinas. In addition, Christianity declared heretical those who derided reason, such as Tertullian, who mocked philosophy by asking, “What does Athens have to do with Jerusalem?” Reason is vital to science and technology. But the divorce between faith and reason in Islam is not a historical accident, just as it is not an accident in Christianity that the two are joined: these results are due to their respective theologies.

In Islam, the relationship between Allah and humans is a master/slave relationship, and this is reflected in everything, most painfully in the Islamic concept of marriage and in how women are treated as a result (hence the link between poverty and female literacy). This belief is rooted in more fundamental dogma regarding the absolute transcendence of Allah, which is also manifested in the Islamic attitude toward science. The practical result, as pointed out earlier, is economic poverty (documented in Mokyr’s The Lever of Riches, and recognized in the 2025 report, which points out that science and technology are related to economic growth (p. 13)). Pope Benedict pointed out that if Allah is completely transcendent, then there is no rational order in His creation[32], and therefore there would be little incentive to try to discover it. This is the same reason that paganism did not develop science and technology. Aristotle started science by counterbalancing Plato’s rationalism with empiricism, but he, Plato, and Socrates had to jettison most of their pagan beliefs in order to lay these foundations of science. And it still required many centuries to get to Bacon and the scientific method.

The trouble with most Americans is that we have no sense of history. Islam has been at war (mostly with Judaism and Christianity) for fourteen centuries (the pagans in its path didn’t last long enough to make any difference). There is little indication that anything will change by 2025. Israel and its Arab neighbors have hated each other ever since Isaac and Ishmael, almost 4,000 years ago (if the Qur’an is to be believed in Sura 19:54). The probability that the enmity between these ancient enemies will cool in the next 15 years is infinitesimally small. To make matters worse, extracts of statements by Osama Bin Laden indicate that the 9/11 attack occurred because:

America is the great Satan. Actually, many Christian Evangelicals and traditional Catholics and Jews sympathize with Bin Laden’s accusation in this case (while deploring his methods), noting our cultural promotion of pornography, abortion, and homosexuality.
American bases are stationed in Saudi Arabia (the home of Mecca), which many Muslims see as blasphemy. It is difficult for Americans to understand why this is so bad; we even protect the right to burn and desecrate our own flag.
Our support for Israel. Since Israel is one of the few democracies in the Mideast, and since its culture doesn’t raise suicide bombers, it seems quite reasonable that we should support it; it’s the right thing to do. As an appeal to self-interest, we can always remember that over the past 105 years, 1.4 billion Muslims have produced only eight Nobel Laureates, while a mere 14 million Jews have produced 167.

Given the history of Islam’s relationship with all other belief systems, the outlook is gloomy. If the past 1,400 years are any guide, Islam will continue to be at war with Paganism, Atheism, Hinduism, Judaism, and Christianity, both in hot wars of conquest and in psychological battles for the hearts and minds of the world.[33]

Muslim Demographics
The 2025 report made a wise decision in covering demographic issues, since they are predictable. But it did not investigate the causal sources (personal and cultural beliefs) of crucial demographic trends. The report writes that “the radical Salafi trend of Islam is likely to gain traction” in “those countries that are likely to struggle with youth bulges and weak economic underpinnings” (p. ix).

This is certainly an accurate prediction. But what human beliefs lead to the behavior that produces youth bulges and weak economies? The answer is quite complex, partially because the Qur’an is not crystal clear on this issue. But generally, “Muslim religiosity and support for Shari’a Law are associated with higher fertility,” and better education, higher wealth, and urbanization do not reduce Muslim fertility (as they do in other religions). The result is that while religious fundamentalism in Islam does not boost fertility as much as it does for Jewish traditionalists in Israel, it is still true that “fertility dynamics could power increased religiosity and Islamism in the Muslim world in the twenty-first century.”[34]

Other Practical Aspects of Islamic Theology
One of the reasons the Western world is at odds with Islam is because of different views on freedom and virtue. Americans generally value freedom over virtue. In Islam, however, virtue is far more important than freedom, despite the fact that virtue requires an act of free will. In other words, Muslims don’t seem to realize that if good behavior is forced, then it is not really virtuous. Meanwhile, here in the USA we seem to have forgotten that vices enslave us—as demonstrated by addictions to drugs, gambling, and sex; we have forgotten that true freedom requires us to be virtuous—that we must bridle our passions in order to be truly free.

A disturbing facet of Islam is that it requires the death of an apostate. Theologically, this is because Allah is master, not father or spouse (as God is most often portrayed in the Bible), and submission to Allah is mandatory in Islam. While it is true that Christianity authorized the secular authorities to burn a few thousand heretics over two thousand years, these were extreme situations of maximum irrationality that were remedied fairly quickly, hundreds of years ago (often a single thoughtful bishop or priest stopped an outbreak). In contrast, fatwas demanding the death penalty for apostates and heretics are still common in Islamic countries.[35]

Theology, Technological Progress, and Cultural Success
Religions do not make people stupid or cowardly. President Bush may have called the 9/11 Islamic terrorists cowardly, but they were not. They went to their deaths as bravely as any American soldier. Nor were they stupid; otherwise they never would have been able to pull off the most devastating terrorist attack on the U.S. in our relatively short history, cleverly devising a way to use our open society and our technology to maximal effect. But as individuals they were deluded, and their culture could not design or build jumbo jets; hence they used ours. This means that Islamic terrorists will use nanotechnological weapons as eagerly as nuclear ones once they get their hands on them. The problem, of course, is that nano-enhanced weapons will be much easier to develop than nuclear ones.

Conclusion
Ever since the time of the Pilgrims, Americans have considered themselves citizens of a “bright, shining city on the hill,” and much of the world agreed, with immigrants pouring in for three centuries to build the most powerful nation in history. Our representative democracy and loosely regulated capitalism, tempered by individual consciences grounded in a Judeo-Christian foundation of rights and responsibilities, have been copied all over the world (at least superficially). But will this shining city endure?

It is the task of the U.S. National Intelligence Council to make sure that it does, and their effort to understand the future is an important step in that direction. Hopefully they will examine more closely the impact that technology, especially productive nanosystems, will have on political structures. In addition, they need to understand the theological underpinnings of Islam, and how it will affect the technological capabilities of Muslim nations.

Addendum
For a better government-sponsored report on how technology will affect us, see Toffler Associates’ Technology and Innovation 2025 at http://www.toffler.com/images/Toffler_TechAndInnRep1-09.pdf.

——————————————————————————–

[1] National Intelligence Council, Global Trends 2025: A Transformed World http://www.dni.gov/nic/PDF_2025/2025_Global_Trends_Final_Report.pdf and www.dni.gov/nic/NIC_2025_project.html

[2] Earlier exceptions are rare, though technology has been lost occasionally—most notably 5th century Europe after the fall of the Roman Empire, and 15th century China after the last voyage of Admiral Zheng He’s Treasure Fleet of the Dragon Throne.

[3] Singularity Symposium, Exponential Growth and the Legend of Paal Paysam. http://www.singularitysymposium.com/exponential-growth.html

[4] Ray Kurzweil, The Law of Accelerating Returns. March 7, 2001. http://www.kurzweilai.net/articles/art0134.html?printable=1

[5] Matthew R. Simmons, Revisiting The Limits to Growth: Could The Club of Rome Have Been Correct, After All? (Part One). Sep 30 2000. http://www.energybulletin.net/node/1512 Note that technological optimists always quote the chess example, while environmental doomsayers always quote the lily pad example.

[6] High-performance lithium battery anodes using silicon nanowires, Candace K. Chan, Hailin Peng, Gao Liu, Kevin McIlwrath, Xiao Feng Zhang, Robert A. Huggins & Yi Cui, Nature Nanotechnology 3, 31 — 35 (2008). http://www.nature.com/nnano/journal/v3/n1/abs/nnano.2007.411.html

[7] See Nanotechnology’s biggest stories of 2008 http://www.newscientist.com/article/dn16340-nanotechnologys-biggest-stories-of-2008.html and Top Ten Nanotechnology Patents of 2008 http://tinytechip.blogspot.com/2008/12/top-ten-nanotechnology-patents-of-2008.html

[8] Paul Rothemund. Folding DNA to create nanoscale shapes and patterns, Nature, V440N16. March 2006.

[9] Christian E. Schafmeister. The Building Blocks of Molecular Nanotechnology. Conference on Productive Nanosystems: Launching the Technology Roadmap. Arlington, VA. Oct. 9–10, 2007.

[10] John N. Randall. A Path to Atomically Precise Manufacturing. Conference on Productive Nanosystems: Launching the Technology Roadmap. Arlington, VA. Oct. 9–10, 2007.

[11] Ralph Merkle and Robert Freitas Jr., “Theoretical analysis of a carbon-carbon dimer placement tool for diamond mechanosynthesis,” Journal of Nanoscience and Nanotechnology. 3(August 2003):319-324; http://www.rfreitas.com/Nano/JNNDimerTool.pdf

[12] Robert A. Freitas Jr. and Ralph C. Merkle, A Minimal Toolset for Positional Diamond Mechanosynthesis, Journal of Computational and Theoretical Nanoscience. Vol.5, 760–861, 2008

[13] Jingping Peng, Robert. Freitas, Jr., Ralph Merkle, James Von Ehr, John Randall, and George D. Skidmore. Theoretical Analysis of Diamond Mechanosynthesis. Part III. Positional C2 Deposition on Diamond C(110) Surface Using Si/Ge/Sn-Based Dimer Placement Tools. Journal of Computational and Theoretical Nanoscience. Vol.3, 28-41, 2006. http://www.molecularassembler.com/Papers/JCTNPengFeb06.pdf

[14] Adrian Bowyer, et al. RepRap-Wealth without money. http://reprap.org/bin/view/Main/WebHome

[15] John Storrs Hall, The Weather Machine. December 23, 2008, http://www.foresight.org/nanodot/?p=2922

[16] National Security Space Office. Space-Based Solar Power As an Opportunity for Strategic Security: Phase 0 Architecture Feasibility Study. http://www.scribd.com/doc/8736624/SpaceBased-Solar-Power-Interim-Assesment-01

[17] John Storrs Hall, Utility Fog: The Stuff that Dreams are Made Of. http://autogeny.org/Ufog.html

[18] John Storrs Hall, The Space Pier: A hybrid Space-launch Tower concept. http://autogeny.org/tower/tower.html

[19] Pacific Northwest National Laboratory, SAMMS: Self-Assembled Monolayers on Mesoporous Supports. http://samms.pnl.gov/

[20] OECD Nuclear Energy Agency. Trends in the nuclear fuel cycle: economic, environmental and social aspects, Organization for Economic Co-operation and Development 2001

[21] Mark Clayton. Will lasers brighten nuclear’s future? The Christian Science Monitor/ August 27, 2008. http://features.csmonitor.com/innovation/2008/08/27/will-lasers-brighten-nuclears-future/

[22] Paul Werbos, What should we be doing today to enhance world energy security, in order to reach a sustainable global energy system? http://www.werbos.com/energy.htm See also Robert Zubrin, Energy Victory: Winning the War on Terror by Breaking Free of Oil. Prometheus Books. November 2007.

[23] John Storrs Hall, The weather machine. December 23, 2008, http://www.foresight.org/nanodot/?p=2922

[24] Tihamer Toth-Fejel, A Few Lesser Implications of Nanofactories: Global Warming is the Least of our Problems, Nanotechnology Perceptions, March 2009.

[25] Exceptions would be small groups who were subject to selective pressure to increase intelligence, such as the Ashkenazi Jews.

[26] Joel Mokyr, The Lever of Riches: Technological Creativity and Economic Progress. Oxford University Press, USA (April 9, 1992). http://www.amazon.com/Lever-Riches-Technological-Creativity-Economic/dp/0195074777

[27] Zhao (Peter) Xiao, Market Economies With Churches and Market Economies Without Churches http://www.danwei.org/business/churches_and_the_market_econom.php

[28] ibid.

[29] Peter Kreeft, Ecumenical Jihad: Ecumenism and the Culture War, Ignatius Press (March 1996). More specifically, Kreeft points out that Muslims have lower rates of abortion, adultery, fornication, and sodomy; and higher rates of prayer and devotion to God. Kreeft then repeats the Biblical admonition that God blesses those who obey His commandments. For atheists and agnostics, it might be more palatable to think of it as evolution in action: If a group encourages behavior that reduces the number of capable offspring, then it is doomed.

[30] Farrukh Saleem, Muslims amongst world’s poorest weakest, illiterate: What Went Wrong. November 08, 2005 http://islamicterrorism.wordpress.com/2008/07/01/muslims-amongst-worlds-poorest-weakest-illiterate-what-went-wrong/

[31] ibid.

[32] Pope Benedict XVI. Faith, Reason and the University: Memories and Reflections. University of Regensburg, September 2006. http://www.vatican.va/holy_father/benedict_xvi/speeches/2006/september/documents/hf_ben-xvi_spe_20060912_university-regensburg_en.html

[33] Note that this report is not a critique of Muslim people—only their beliefs (though it may not feel that way to them).

[34] Kaufmann, E. P. , “Islamism, Religiosity and Fertility in the Muslim World,” Annual meeting of the ISA’s 50th Annual Convention: Exploring the Past, Anticipating the Future. New York, NY. Feb 13-15, 2009. http://www.allacademic.com/meta/p312181_index.html

[35] On the other hand (to put things in perspective), compared to the atheists Stalin, Mao, and Pol Pot, even the most deadly Muslim extremists are rank amateurs at mass murder. Perhaps that is why Communism has barely lasted two generations, while Islam has lasted fourteen centuries. You just can’t go around killing people.

Tihamer Toth-Fejel, MS
General Dynamics Advanced Information Systems
Michigan Research and Development Center


Paul J. Crutzen

Although this is the scenario we all hope (and work hard) to avoid, the consequences should be of interest to everyone concerned with mitigating the risk of mass extinction:

“WHEN Nobel prize-winning atmospheric chemist Paul Crutzen coined the word Anthropocene around 10 years ago, he gave birth to a powerful idea: that human activity is now affecting the Earth so profoundly that we are entering a new geological epoch.

The Anthropocene has yet to be accepted as a geological time period, but if it is, it may turn out to be the shortest — and the last. It is not hard to imagine the epoch ending just a few hundred years after it started, in an orgy of global warming and overconsumption.

Let’s suppose that happens. Humanity’s ever-expanding footprint on the natural world leads, in two or three hundred years, to ecological collapse and a mass extinction. Without fossil fuels to support agriculture, humanity would be in trouble. “A lot of things have to die, and a lot of those things are going to be people,” says Tony Barnosky, a palaeontologist at the University of California, Berkeley. In this most pessimistic of scenarios, society would collapse, leaving just a few hundred thousand eking out a meagre existence in a new Stone Age.

Whether our species would survive is hard to predict, but what of the fate of the Earth itself? It is often said that when we talk about “saving the planet” we are really talking about saving ourselves: the planet will be just fine without us. But would it? Or would an end-Anthropocene cataclysm damage it so badly that it becomes a sterile wasteland?

The only way to know is to look back into our planet’s past. Neither abrupt global warming nor mass extinction are unique to the present day. The Earth has been here before. So what can we expect this time?”

Read the entire article in New Scientist.

Also read “Climate change: melting ice will trigger wave of natural disasters” in the Guardian about the potential devastating effects of methane hydrates released from melting permafrost in Siberia and from the ocean floor.