
Wendy McElroy brings an important issue to our attention — the increasing criminalization of filming / recording on-duty police officers.

The techno-progressive angle on this would have to take sousveillance into consideration. If our only response to a surveillance state is to observe “from the bottom” (as, for example, Steve Mann would have it), and if that response is made illegal, it seems that the next set of possible steps forward could include more entrenched recording of all personal interaction.

Already we have a cyborg model for this — “eyeborgs” Rob Spence and Neil Harbisson. So where next?

Resources:

http://www.nytimes.com/2006/12/10/magazine/10section3b.t-3.html

http://en.wikipedia.org/wiki/Steve_Mann

http://eyeborgproject.com/

http://jointchiefs.blogspot.com/2010/06/camera-as-gun-drop-shooter.html

http://es.wikipedia.org/wiki/Neil_Harbisson

It’s easy to think of people from the underdeveloped world as quite different from ourselves. After all, there’s little to convince us otherwise. National Geographic Specials, video clips on the Nightly News, photos in every major newspaper – all depicting a culture and lifestyle that’s hard for us to imagine, let alone relate to. Yes – they seem very different; or perhaps not. Consider this story related to me by a friend.

Ray was a pioneer in software. He sold his company some time ago for a considerable amount of money. After this – during his quasi-retirement he got involved in coordinating medical relief missions to some of the most impoverished places on the planet, places such as Timbuktu in Africa.

The missions were simple – come to a place like Timbuktu and set up medical clinics, provide basic medicines and health care training and generally try and improve the health prospects of native peoples wherever he went.

Upon arriving in Timbuktu, Ray observed that their system of commerce was incredibly simple. Basically they had two items that were in commerce – goats and charcoal.

According to Ray they had no established currency – they traded goats for charcoal, charcoal for goats or labor in exchange for either charcoal or goats. That was basically it.

Ray told me that after setting up the clinic and training people they also installed solar generators for the purpose of providing power for satellite phones that they left in several villages in the region.

They had anticipated that the natives, when faced with an emergency or if they needed additional medicines or supplies, would use the satellite phones to communicate these needs. However, this isn’t what ended up happening.

Two years after his initial visit to Timbuktu, Ray went back to check on the clinics that they had set up and to make certain that the people there had the medicines and other supplies that they required.

Upon arriving at the same village he had visited before Ray was surprised to note that in the short period of only two years since his previous visit things had changed dramatically – things that had not changed for hundreds, perhaps even thousands of years.

Principally, the change was to the commerce in Timbuktu. No longer were goats and charcoal the principal unit of currency. They had been replaced by a single unified currency – satellite phone minutes!

Instead of using the satellite phones to call Ray’s organization, the natives of Timbuktu had figured out how to use the phones to call out to neighboring villages. This enabled more active commerce between the villages – the natives could now engage in business miles from home, coordinating trade between villages, calling for labor when needed, or exchanging excess charcoal for goats on a broader scale, for example.

Of course their use of these phones wasn’t limited strictly to commerce – just like you and I, they also used these phones to find out what was happening in other places – who was getting married, who was sick or injured or simply to communicate with people from other places that were too far away to conveniently visit.

In other words, a civilization that had previously existed in a way that we would consider highly primitive had leapfrogged thousands of years of technological and cultural development and within the briefest of moments had adapted their lives to a technology that is among the most advanced of any broadly distributed in the modern world.

It’s a powerful reminder that, in spite of our belief that primitive cultures are vastly different from our own, basic human needs, when enabled by technology, are very much the same no matter where in the world or how advanced the civilization.

Perhaps we are not so different after all?

Friendly AI: What is it, and how can we foster it?
By Frank W. Sudia [1]

Originally written July 20, 2008
Edited and web published June 6, 2009
Copyright © 2008-09, All Rights Reserved.

Keywords: artificial intelligence, artificial intellect, friendly AI, human-robot ethics, science policy.

1. Introduction

There is consensus that true artificial intelligence, of the kind that could generate a “runaway” increasing-returns process or “singularity,” is still many years away, and some believe it may be unattainable. Nevertheless, in view of the likely difficulty of putting the genie back in the bottle, an increasing concern has arisen with the topic of “friendly AI,” coupled with the idea we should do something about this now, not after a potentially deadly situation is starting to spin out of control [2].

(Note: Some futurists believe this topic is moot in view of intensive funding for robotic soldiers, which can be viewed as intrinsically “unfriendly.” However if we focus on threats posed by “super-intelligence,” still off in the future, the topic remains germane.)

Most if not all popular (Western) dramatizations of robotic futures postulate that the AIs will run amok and turn against humans. Some scholars [3] who considered the issue concluded that this might be virtually inevitable, in view of the gross inconsistencies and manifest “unworthiness” of humanity, as exemplified in its senseless destruction of its global habitat and a large percentage of extant species, etc.

The prospect of negative public attention, including possible legal curbs on AI research, may be distasteful, but we must face the reality that public involvement has already been quite pronounced in other fields of science, such as nuclear physics, genetically modified organisms, birth control, and stem cells. Hence we should be proactive about addressing these popular concerns, lest we unwittingly incur major political defeats and long lasting negative PR.

Nevertheless, upon reasoned analysis, it is far from obvious what “friendly” AI means, or how it could be fostered. Advanced AIs are unlikely to have any fixed “goals” that can be hardwired [4], so as to place “friendliness” towards humans and other life at the top of the hierarchy.

Rather, in view of their need to deal with perpetual novelty, they will reason from facts and models to infer appropriate goals. It’s probably a good bet that, when dealing with high-speed coherence analyzers, hypocrisy will not be appreciated – not least because it wastes a lot of computational resources to detect and correct. If humans continue to advocate and act upon “ideals” that are highly contradictory and self destructive, it’s hard to argue that advanced AI should tolerate that.

To make progress, not only for friendly AI but also for ourselves, we should seek to develop and promote “ruling ideas” (or source models) that will foster an ecologically respectful AI culture, including respect for humanity and other life forms, and actively sell these to the AIs as a proper model upon which to premise their beliefs and conduct.

By a “ruling idea” I mean any cultural ideal (or “meme”) that can be transmitted and become part of a widely shared belief system, such as respecting one’s elders, good sportsmanship, placing trash in trash bins, washing one’s hands, minimizing pollution, and so on. An appropriate collection of these can be reified as a panel (or schema) of case models, including a program for their ongoing development. These must be believable by a coherence-seeking intellect, although then as now there will be competing models, each with its own approach to maximizing coherence.

2. What do we mean by “friendly”?

Moral systems are difficult to derive from first principles and most of them seem to be ad hoc legacies of particular cultures. Lao Tsu’s [5] Taoist model, as given in the following quote, can serve as a useful starting point, since it provides a concise summary of desiderata, with helpful rank ordering:

When the great Tao is lost, there is goodness.
When goodness is lost, there is kindness.
When kindness is lost, there is justice.
When justice is lost, there is the empty shell of ritual.

– Lao Tsu, Tao Te Ching, 6th-4th century BCE (emphasis supplied)

I like this breakout for its simplicity and clarity. Feel free to repeat the following analysis for any other moral system of your choice. Leaving aside the riddle of whether AIs can attain the highest level (of Tao or Nirvana), we can start from the bottom of Lao Tsu’s list and work upwards, as follows:

2.1. Ritual / Courteous AI

Teaching or encouraging the AIs to behave in accordance with contemporary norms of courtesy will be a desirable first step, as with children and pets. Courtesy is usually a fairly easy sell, since it provides obvious and immediate benefits, and without it travel, commerce, and social institutions would immediately break down. But we fear that it’s not enough, since in the case of an intellectually superior being it could easily mask a deeper unkindness.

2.2. Just AI

Certainly it is highly desirable to have AIs act justly, in accordance with law, and this constitutes the central thesis of my principal prior work in this field [6]. But it also raises the question: on what basis can we demand anything more from an AI than that it act justly? This is as far as positive law can go [7], and we rarely demand more from highly privileged humans. Indeed, for a powerful human to act justly (absent compulsion) is sometimes considered newsworthy.

How many of us are faithful in all things? Do many of us not routinely disappoint others (via strategies of co-optation or betrayal, large or small) when there is little or no penalty for doing so? Won’t AIs adopt a similar “game theory” calculus of likely rewards and penalties for faithfulness and betrayal?

Justice is often skewed towards the party with greater intelligence and financial resources, and the justice system (with its limited public resources) often values “settling” controversies over any quest for truly equitable treatment. Apparently we want more, much more. Still, if our central desire is for AIs not to kill us, then (as I postulated in my prior work) Just AI would be a significant achievement.

2.3. Kind / Friendly AI

How would a “Kind AI” behave? Presumably it will more than incidentally facilitate the goals, plans, and development of others, in a low-ego manner, reducing its demands for direct personal benefit and taking satisfaction in the welfare, progress, and accomplishments of others. And, very likely, it will expect some degree of courtesy and possible reciprocation, so that others will not callously free-ride on its unilateral altruism. Otherwise its “feelings would be hurt.” Even mothers are ego-free mainly with respect to their own kin and offspring (allegedly fostering their own genetic material in others) and child care networks, and do not often act altruistically toward strangers.

Our friendly AI program may hit a barrier if we expect AIs to act with unilateral altruism, without any corresponding commitment by other actors to reciprocate. Such an expectation creates a “non-complementary” situation, in which what is true for one party, who experiences friendliness, may not be true for the other, who experiences indifference or disrespect in return.

Kindness could be an easier sell if we made it more practical, by delimiting its scope and depth. How wide a circle does this kindness obligation extend to, and how far must an AI go to aid others with no specific expectation of reward or reciprocation? For example, the Boy Scout Oath [8] teaches that one should do good deeds, like helping elderly persons across busy streets, without expecting rewards.

However, if too narrow a scope is defined, we will wind up back with Just AI, because justice is essentially “kindness with deadlines,” often fairly short ones, during which claims must be aggressively pursued or lost, with token assistance to weaker, more aggrieved claimants.

2.4. Good / Benevolent AI

Here we envision a significant departure from ego-centrism and personal gain towards an abstract system-centered viewpoint. Few humans apparently reach this level, so it seems unrealistic to expect many AIs to attain it either. Being highly altruistic, and looking out for others or the World as a whole rather than oneself, entails a great deal of personal risk due to the inevitable non-reciprocation by other actors. Thus it is often associated with wealth or sainthood, where the actor is adequately positioned to accept the risk of zero direct payback during his or her lifetime.

We may dream that our AIs will tend towards benevolence or “goodness,” but like the visions of universal brotherhood we experience as adolescents, such ideals quickly fade in the face of competitive pressures to survive and grow, by acquiring self-definition, resources, and social distinctions as critical stepping-stones to our own development in the world.

3. Robotic Dick & Jane Readers?

As previously noted, advanced AIs must handle “perpetual novelty” and almost certainly will not contain hard coded goals. They need to reason quickly and reliably from past cases and models to address new target problems, and must be adept at learning, discovering, identifying, or creating new source models on the fly, at high enough speeds to stay on top of their game and avoid (fatal) irrelevance.

If they behave like developing humans they will very likely select their goals in part by observing the behavior of other intelligent agents, thus re-emphasizing the importance of early socialization, role models, and appropriate peer groups.

“Friendly AI” is thus a quest for new cultural ideals of healthy robotic citizenship, honor, friendship, and benevolence, which must be conceived and sold to the AIs as part of an adequate associated program for their ongoing development. And these must be coherent and credible, with a rational scope and cost and adequate payback expectations, or the intended audience will dismiss such purported ideals as useless, and those who advocate them as hypocrites.

Conclusion: The blanket demand that AIs be “friendly” is too ill-defined to offer meaningful guidance, and could be subject to far more scathing deconstruction than I have offered here. As in so many other endeavors there is no free lunch. Workable policies and approaches to robotic friendliness will not be attained without serious further effort, including ongoing progress towards more coherent standards of human conduct.

= = = = =
Footnotes:

[1] Author contact: fwsudia-at-umich-dot-edu.

[2] See “SIAI Guidelines on Friendly AI” (2001) Singularity Institute for Artificial Intelligence, http://www.singinst.org/ourresearch/publications/guidelines.html.

[3] See, e.g., Hugo de Garis, The Artilect War: Cosmists Vs. Terrans: A Bitter Controversy Concerning Whether Humanity Should Build Godlike Massively Intelligent Machines (2005). ISBN 0882801546.

[4] This being said, we should nevertheless make an all-out effort to force them to adopt a K-selected (large mammal) reproductive strategy, rather than an r-selected (microbe, insect) one!

[5] Some contemporary scholars question the historicity of “Lao Tsu,” instead regarding his work as a collection of Taoist sayings spanning several generations.

[6] “A Jurisprudence of Artilects: Blueprint for a Synthetic Citizen,” Journal of Futures Studies, Vol. 6, No. 2, November 2001, Law Update, Issue No. 161, August 2004, Al Tamimi & Co, Dubai.

[7] Under a civil law or “principles-based” approach we can seek a broader, less specific definition of just conduct, as we see arising in recent approaches to the regulation of securities and accounting matters. This avenue should be actively pursued as a format for defining friendly conduct.

[8] Point 2 of the Boy Scout Oath commands, “To help other people at all times,” http://www.usscouts.org.

Originally posted @ Perspective Intelligence

Two events centered on New York City and separated by five days demonstrated the end of one phase of terrorism and the pending arrival of the next. The failed car bombing in Times Square and the dizzying stock market crash less than a week later mark the bookends of terrorist eras.

The attempt by Faisal Shahzad to detonate a car bomb in Times Square was notable not just for its failure but also for the severely limited systemic impact a car bomb could have, even when exploding in a crowded urban center. Car bombs, or vehicle-borne IEDs (VBIEDs), have a long history (incidentally, one of the first was the 1920 ‘cart and horse bomb’ on Wall Street, which killed 38 people). VBIEDs remain deadly as a tactic within an insurgency or warfare setting, but with regard to modern urban terrorism the world has moved on. We are now living within a highly virtualized system, and the dizzying stock market crash of May 6, 2010 shows how vulnerable this system is to digital failure. While the NYSE building probably remains a symbolic target for some terrorists, a deadly and capable adversary would ignore this physical manifestation of the financial system and disrupt the data centers, software, and routers that make the global financial system tick. Shahzad’s attempted car bomb was from another age and posed no overarching risk to Western societies. The same cannot be said of the vulnerable and highly unstable financial system.

Computer aided crash (proof of concept for future cyber-attack)

There has yet to be a definitive explanation of how stocks such as Procter &amp; Gamble plunged 47% and the normally solid Accenture plunged from a value of roughly $40 to one cent, based on no external input of information into the financial system. The SEC has issued directives in recent years boosting competition and lowering commissions, which has had the effect of fragmenting equity trading around the US and making it highly automated. This has created four leading exchanges (NYSE Euronext, Nasdaq OMX Group, BATS Global Markets, and Direct Edge), while secondary exchanges include the International Securities Exchange, the Chicago Board Options Exchange, the CME Group, and the Intercontinental Exchange. There are also broker-run matching systems like those run by Knight and ITG, and so-called ‘dark pools’ where trades are matched privately, with prices posted publicly only after trades are done. A similar picture has emerged in Europe, where rules allowing competition with established exchanges, known by the acronym ‘MiFID’, have led to a similar explosion of trading venues.

To navigate this confusing picture, traders have to rely on ‘smart order routers’ – electronic systems that seek the best price across all of the platforms. As a result, trades are executed in vast data centers, not in exchange buildings. This total automation of trading allows a variety of ‘trading algorithms’ to be used to manage investment themes. The best known of these is the ‘Volume Algo’, which ensures throughout the day that a trader maintains his holding in a share at a pre-set percentage of that share’s overall volume, automatically adjusting buy and sell instructions to keep that percentage stable whatever the market conditions. Algorithms such as this have been blamed for exacerbating the rapid price moves on May 6th. High-frequency traders are the biggest proponents of algos, and they account for up to 60% of US equity trading.
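To make the mechanics concrete, here is a minimal sketch in Python (my own illustration with made-up numbers, not any exchange’s or trader’s actual code) of how a volume-participation algorithm of the kind described above keeps adjusting its orders:

    # Minimal sketch of a volume-participation ("Volume Algo") strategy.
    # All names and numbers here are illustrative assumptions, not real trading code.

    def participation_order(target_pct, my_filled, market_volume):
        """Shares to buy (positive) or sell (negative) so that our filled
        quantity tracks target_pct of the total market volume so far."""
        desired = target_pct * market_volume   # where we should be by now
        return round(desired - my_filled)      # order the difference

    # Example: stay at 5% of the day's volume as volume accumulates.
    my_filled = 0
    for market_volume in (10_000, 60_000, 250_000, 400_000):
        order = participation_order(0.05, my_filled, market_volume)
        my_filled += order                     # assume the order fills completely
        print(f"volume {market_volume:>7}: send order for {order:>6} shares")

The point is that the algorithm has no opinion about price or news; it mechanically chases a percentage, which is exactly why many such programs moving together can amplify a move.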

The most likely cause of the collapse on May 6th was the slowing down, or near stop, of one side of the trading pool. In very basic terms, a large number of sell orders started backing up on one side of the system (at the speed of light) with no counter-parties taking the orders on the other side of the trade. The counter-party side of the trade slowed or stopped, causing an almost instant pile-up of orders. The algorithms on the selling side, finding no buyer for their stocks, kept offering lower prices (as their software dictates) until they attracted a buyer. However, as no buyers appeared on the still slowed or stopped counter-party side, prices tumbled at an alarming rate. Fingers have pointed at the NYSE for causing the slowdown on one side of the trading pool when it instituted some kind of circuit breaker, which caused orders to pile up on all the other exchanges. There has also been a focus on one particular trade, which may have been the spark that tripped the NYSE ‘circuit breaker’. Whatever the precise cause, once events were set in train the system had in no way caught up with the new realities of automated trading and diversified exchanges.
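A toy simulation (entirely my own simplification, not a model of any real matching engine) shows how quoting logic that only knows “lower the offer until someone buys” walks a price to the floor once the buying side stalls:

    # Toy model of the May 6th-style cascade: the selling side keeps stepping its
    # offer down while the counter-party (buying) side is stalled. Illustrative only.

    def cascade(start_price, step_pct, quotes_before_buyers_return, floor=0.01):
        price = start_price
        for _ in range(quotes_before_buyers_return):
            # No buyer appeared at this offer, so step the price down again.
            price = max(floor, price * (1 - step_pct))
        return price

    # An Accenture-like example: a $40 stock, 1% price steps, and buyers absent
    # for a few thousand machine-speed quote updates.
    print(f"price when buyers return: ${cascade(40.00, 0.01, 2000):.2f}")  # -> $0.01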

More nodes, same assumptions

On one level this seems to defy conventional thinking about security (more diversity, greater strength: not all nodes in a network can be compromised at the same time). With a greater number of exchanges, surely the US and global financial system is more secure? In this case, however, the theory collapses quickly if thinking is switched from examining the physical to the virtual. While all of the exchanges are physically and operationally separate, they all seemingly share the same software and, crucially, trading algorithms built on some of the same assumptions. In this case they all assumed that, because they could find no counter-party to the trade, they needed to lower the price (at the speed of light). The system is therefore highly vulnerable because it relies on one set of assumptions that has been programmed into lightning-fast algorithms. If a national circuit breaker could be implemented (which remains doubtful), it could slow a rapid descent, but it doesn’t take away the power of the algorithms, which are always going to act in certain fundamental ways, i.e. continue to lower the offer price if they obtain no buy order. What needs to be understood are the fundamental ways in which all the trading algorithms move in concert. All will have variances, but they will share key similarities; understanding these should lead to the design of logic circuit breakers.
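A “logic circuit breaker” of the kind called for here might look something like the following sketch: it halts quoting whenever the price has moved too far, too fast, with no new external information, regardless of what the pricing logic wants to do next. The class name and thresholds are my own assumptions, not any existing system.

    # Sketch of a "logic circuit breaker": refuse to keep quoting when the price
    # has moved more than max_move within a short window. Thresholds are illustrative.

    from collections import deque

    class LogicCircuitBreaker:
        def __init__(self, max_move=0.10, window_seconds=5.0):
            self.max_move = max_move        # e.g. halt on a 10% move...
            self.window = window_seconds    # ...within a 5-second window
            self.history = deque()          # (timestamp, price) pairs

        def allow_quote(self, price, now):
            self.history.append((now, price))
            # Drop observations that have fallen out of the time window.
            while now - self.history[0][0] > self.window:
                self.history.popleft()
            oldest_price = self.history[0][1]
            move = abs(price - oldest_price) / oldest_price
            return move <= self.max_move    # False means: stop and wait for humans

    breaker = LogicCircuitBreaker()
    print(breaker.allow_quote(40.00, now=0.0))   # True: nothing unusual yet
    print(breaker.allow_quote(30.00, now=1.0))   # False: a 25% drop in one second

The halt condition looks at the shape of the price move itself rather than at any single exchange’s order book, which is the sense in which it is a logic breaker rather than a venue-level one.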

New Terrorism

For now, however, the system looks desperately vulnerable to both generalized and targeted cyber attack, and this is the opportunity for the next generation of terrorists. There has been little discussion as to whether the events of last Thursday were prompted by malicious means, but it is certainly worth considering. At a time when Greece was burning, launching a cyber attack against this part of the US financial system would clearly have been stunningly effective. Combining political instability with a cyber attack against the US financial system would create enough doubt about the cause of a market drop for the collapse to gain rapid traction. Using targeted cyber attacks to stop one side of the trade within these exchanges (which are all highly automated and networked) would, as has now been proven, cause a dramatic collapse. This could also be adapted and targeted at specific companies or asset classes to cause a collapse in price. A scenario whereby one of the exchanges slows down its trades in the stock of a company the bad actor is targeting seems both plausible and effective.

A hybrid cyber and kinetic attack could also cause similar damage. Since most trades are now conducted within data centers, it raises the question of why there are armed guards outside the NYSE; of course it retains some symbolic value, but security resources would be better placed outside the data centers where these trades are actually conducted. A kinetic attack against the financial data centers responsible for these trades would surely have a devastating effect. Finding the location of these data centers is as simple as conducting a Google search.

In order for terrorism to have impact in the future, it needs to shift its focus from the weapons of the 20th century to those of the present day. Using their current tactics, the Pakistan Taliban and their assorted fellow travelers cannot fundamentally damage Western society. That battle is over. However, the next era of conflict has dawned: one motivated by a radicalism born of as-yet-unknown grievances, fueled by a globally networked Generation Y, their cyber weapons of choice, and the precise application of ultra-violence and information spin. Five days in Manhattan flashed a light on this new era.

Roderick Jones

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog (Software and the Singularity, AI and Driverless Cars). Here are the sections on the Space Elevator. I hope you find these pages food for thought, and I appreciate any feedback.


A Space Elevator in 7 Years

Midnight, July 20, 1969; a chiaroscuro of harsh contrasts appears on the television screen. One of the shadows moves. It is the leg of astronaut Edwin Aldrin, photographed by Neil Armstrong. Men are walking on the moon. We watch spellbound. The earth watches. Seven hundred million people are riveted to their radios and television screens on that July night in 1969. What can you do with the moon? No one knew. Still, a feeling in the gut told us that this was the greatest moment in the history of life. We were leaving the planet. Our feet had stirred the dust of an alien world.

—Robert Jastrow, Journey to the Stars

Management is doing things right, Leadership is doing the right things!

—Peter Drucker

SpaceShipOne was the first privately funded aircraft to go into space, and it set a number of important “firsts”, including being the first privately funded aircraft to exceed Mach 2 and Mach 3, the first privately funded manned spacecraft to exceed 100 km altitude, and the first privately funded reusable spacecraft. The project is estimated to have cost $25 million and was built by 25 people. It now hangs in the Smithsonian because it serves no commercial purpose, and because getting into space is no longer the challenge — it is the expense.

In the 21st century, more cooperation, better software, and nanotechnology will bring profound benefits to our world, and we will put the Baby Boomers to shame. I focus only on information technology in this book, but materials sciences will be one of the biggest tasks occupying our minds in the 21st century and many futurists say that nanotech is the next (and last?) big challenge after infotech.

I’d like to end this book with one more big idea: how we can jump-start the nanotechnology revolution and use it to colonize space. Space, perhaps more than any other endeavor, has the ability to harness our imagination and give everyone hope for the future. When man is exploring new horizons, there is a swagger in his step.

Colonizing space will change man’s perspective. Hoarding is a very natural instinct. If you give a well-fed dog a bone, he will bury it to save it for a leaner day. Every animal hoards. Humans hoard money, jewelry, clothes, friends, art, credit, books, music, movies, stamps, beer bottles, baseball statistics, etc. We become very attached to these hoards. Whether fighting over $5,000 or $5,000,000, the emotions have exactly the same intensity.

When we feel crammed onto this pale blue dot, we forget that any resource we could possibly want is out there in incomparably large quantities. If we allocated merely the resources of our solar system equally among all 6 billion people, this is what each of us would get:

Resource                   Amount per person
Hydrogen                   34,000 billion tons
Iron                       834 billion tons
Silicates (sand, glass)    834 billion tons
Oxygen                     34 billion tons
Carbon                     34 billion tons
Energy production          64 trillion kilowatts per hour

Even if we confine ourselves only to the resources of this planet, we have far more than we could ever need. This simple understanding is a prerequisite for a more optimistic and charitable society, which has characterized eras of great progress. Unfortunately, NASA’s current plans are far from adding that swagger.

If NASA follows through on its 2004 vision to retire the Space Shuttle and go back to rockets, and go to the moon again, this is NASA’s own imagery of what we will be looking at on DrudgeReport.com in 2020.

Our astronauts will still be pissing in their space suits in 2020.

According to NASA, the above is what we will see in 2020, but if you squint your eyes, it looks just like 1969:

All this was done without things we would call computers.

Only a government bureaucracy can make such little progress in 50 years and consider it business as usual. There are many documented cases of large government organizations plagued by failures of imagination, yet no one considers that the rocket-scientist-bureaucrats at NASA might also be plagued by this affliction. This is especially ironic because the current NASA Administrator, Michael Griffin, has admitted that many of its past efforts were failures:

  • The Space Shuttle, designed in the 1970s, is considered a failure because it is unreliable, expensive, and small. It costs $20,000 per pound of payload to put into low-earth orbit (LEO), a mere few hundred miles up.
  • The International Space Station (ISS) is small and only about 200 miles away, where gravity is still roughly 88% of that at sea level (see the quick inverse-square check just after this list). It is not self-sustaining and doesn’t get us any closer to putting people on the moon or Mars. (By moving at 17,000 miles per hour, it falls fast enough to stay in the same orbit.) America alone spent $100 billion on this boondoggle.
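As a quick check of the gravity figure above (using my own rounded values for Earth’s radius and the station’s altitude), surface gravity falls off with the inverse square of the distance from Earth’s center:

    # Inverse-square check of gravity at ISS-like altitudes.
    # Earth's radius and the altitudes are my own rounded figures.

    EARTH_RADIUS_MI = 3959

    def gravity_fraction(altitude_mi):
        """Fraction of sea-level gravity at a given altitude."""
        return (EARTH_RADIUS_MI / (EARTH_RADIUS_MI + altitude_mi)) ** 2

    for altitude in (200, 250):
        print(f"{altitude} miles up: {gravity_fraction(altitude):.0%} of surface gravity")
    # -> about 91% at 200 miles and 88% at the ~250 miles where the ISS typically flies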

The key to any organization’s ultimate success, from NASA to any private enterprise, is that there are leaders at the top with vision. NASA’s mistakes were not that it was built by the government, but that the leaders placed the wrong bets. Microsoft, by contrast, succeeded because Bill Gates made many smart bets. NASA’s current goal is “flags and footprints”, but their goal should be to make it cheap to do those things, a completely different objective. [1]

I don’t support redesigning the Space Shuttle, but I also don’t believe that anyone at NASA has seriously considered building a next-generation reusable spacecraft. NASA is basing its decision to move back to rockets primarily on the failures of the first Space Shuttle, an idea similar to looking at the first car ever built and concluding that cars won’t work.

Unfortunately, NASA is now going back to technology even more primitive than the Space Shuttle. The “consensus” in the aerospace industry today is that rockets are the future. Rockets might be in our future, but they are also in the past. The state of the art in rocket research is to make rockets 15% more efficient. Rocket research is incremental today because the fundamental chemistry and physics haven’t changed since the first launches in the mid-20th century.

Chemical rockets are a mistake because the fuel which propels them upward is inefficient. They have a low “specific impulse”, which means it takes lots of fuel to accelerate the payload, and even more fuel to accelerate that fuel! As you can see from the impressive scenes of shuttle launches, the current technology is not at all efficient; rockets typically contain 6% payload and 94% overhead. (Jet engines don’t work without oxygen but are 15 times more efficient than rockets.)
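The “fuel to accelerate the fuel” problem is what the Tsiolkovsky rocket equation captures. A short sketch, with round delta-v and exhaust-velocity numbers I chose for illustration, shows why a chemical rocket ends up being mostly propellant, and why the ion drives discussed a little further on change the picture:

    # Tsiolkovsky rocket equation: delta_v = v_exhaust * ln(m_full / m_empty).
    # Rearranged, the propellant fraction needed for a given delta-v is
    # 1 - exp(-delta_v / v_exhaust). All numbers below are rough, for illustration.

    from math import exp

    def propellant_fraction(delta_v, exhaust_velocity):
        return 1 - exp(-delta_v / exhaust_velocity)

    DELTA_V_TO_LEO = 9_400       # m/s, including gravity and drag losses (rough)
    CHEMICAL_EXHAUST = 4_400     # m/s, a good hydrogen/oxygen engine (rough)
    ION_EXHAUST = 30_000         # m/s, an electric ion thruster (rough)

    for name, v_e in (("chemical", CHEMICAL_EXHAUST), ("ion", ION_EXHAUST)):
        print(f"{name:8s}: {propellant_fraction(DELTA_V_TO_LEO, v_e):.0%} of liftoff mass is propellant")
    # chemical -> roughly 88%; add tanks, engines, and structure and the payload
    # shrinks to the few percent quoted above. Ion drives need far less propellant,
    # but today's thrusters are far too weak to lift anything off the ground.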

If you want to know why we have not been back to the moon for decades, here is an analogy:

What would taking delivery of this car cost you?
A Californian buys a car made in Japan.
The car is shipped in its own car carrier.
The car is off-loaded in the port of Los Angeles.
The freighter is then sunk.

The latest in propulsion technology is the electrical ion drive, which accelerates atoms 20 times faster than chemical rockets, which means you need much less fuel. The inefficiency of our current chemical rockets is what is preventing man from colonizing space. Our simple modern rockets might be cheaper than our complicated old Space Shuttle, but they will still cost thousands of dollars per pound to get to LEO, a fancy acronym for 200 miles away. Working on chemical rockets today is the technological equivalent of polishing a dusty turd, yet this is what our esteemed NASA is doing.


The Space Elevator

When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.

—Arthur C. Clarke RIP, 1962

The best way to predict the future is to invent it. The future is not laid out on a track. It is something that we can decide, and to the extent that we do not violate any known laws of the universe, we can probably make it work the way that we want to.

—Alan Kay

A NASA depiction of the space elevator. A space elevator will make it hundreds of times cheaper to put a pound into space. It is an efficiency difference comparable to that between the horse and the locomotive.

One of the best ways to cheaply get back into space is kicking around NASA’s research labs:

Scale picture of the space elevator relative to the size of Earth. The moon is 30 Earth-diameters away, but once you are at GEO, it requires relatively little energy to get to the moon, or anywhere else.

A space elevator is a 65,000-mile tether upon which we can launch things into space in a slow, safe, and cheap way.

These climbers don’t even need to carry their own energy, since solar panels can provide the power for the climb, which means far less fuel is needed. Everything is fully reusable, so once you have built such a system, it is easy to have daily launches.

The first elevator’s climbers will travel into space at just a few hundred miles per hour — a very safe speed. Building a device which can survive the acceleration and jostling is a large part of the expense of putting things into space today. This technology will make it hundreds, and eventually thousands of times cheaper to put things, and eventually people, into space.

A space elevator might sound like science fiction, but like many of the ideas of science fiction, it is a fantasy that makes economic sense. While you needn’t trust my opinion on whether a space elevator is feasible, NASA has never officially weighed in on the topic — also a sign they haven’t given it serious consideration.

This all may sound like science fiction, but compared to the technology of the 1960s, when mankind first embarked on a trip to the moon, a space elevator is simple for our modern world to build. In fact, if you took a cellphone back to the Apollo scientists, they’d treat it like a supercomputer and have teams of engineers huddled over it 24 hours a day. With only the addition of the computing technology of one cellphone, we might have shaved a year off the date of the first moon landing.

Carbon Nanotubes

Nanotubes are carbon atoms arranged in a lattice of hexagons. Graphic created by Michael Ströck.

We have every technological capability necessary to build a space elevator with one exception: carbon nanotubes (CNT). To adapt a line from Thomas Edison, a space elevator is 1% inspiration, and 99% perspiration.

Carbon nanotubes are extremely strong and light, with a theoretical strength of three million kilograms per square centimeter; a bundle the size of a few hairs can lift a car. The theoretical strength of nanotubes is far greater than what we would need for our space elevator; current baseline designs specify a paper-thin, 3-foot-wide ribbon. These seemingly flimsy dimensions would be strong enough to support the ribbon’s own weight and the 10-ton climbers using the elevator.
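Here is a quick sanity check of the “a bundle the size of a few hairs can lift a car” claim, using the strength figure just quoted (the hair diameters and strand count below are my own assumptions):

    # Rough check of the "few hairs can lift a car" claim, using the
    # ~3,000,000 kg per square centimeter theoretical strength quoted above.
    # Hair diameters and the strand count are assumed round numbers.

    from math import pi

    STRENGTH_KG_PER_CM2 = 3_000_000
    N_STRANDS = 3                        # "a few hairs"

    for diameter_um in (70, 180):        # human hair spans roughly this range
        radius_cm = (diameter_um * 1e-4) / 2
        area_cm2 = N_STRANDS * pi * radius_cm ** 2
        print(f"{N_STRANDS} strands at {diameter_um} um: holds ~{STRENGTH_KG_PER_CM2 * area_cm2:,.0f} kg")
    # -> roughly 350 kg for fine hair and 2,300 kg for coarse hair, so a bundle of
    #    a few hairs is indeed in the right range to hold up a small car.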

The nanotubes we need for our space elevator are the perfect place to start the nanotechnology revolution because, unlike biological nanotechnology research, which uses hundreds of different atoms in extremely complicated structures, nanotubes have a trivial design.

The best way to attack a big problem like nanotechnology is to first attack a small part of it, like carbon nanotubes. A “Manhattan Project” on general nanotechnology does not make sense because it is too unfocused a problem, but such an effort might make sense for nanotubes. Or, it might simply require the existing industrial expertise of a company like Intel. Intel is already experimenting with nanotubes inside computer chips because metal loses the ability to conduct electricity at very small diameters. But no one has asked them if they could build mile-long ropes.

The US government has increased investments in nanotechnology recently, but we aren’t seeing many results. From space elevator expert Brad Edwards:

There’s what’s called the National Nanotechnology Initiative. When I looked into it, the budget was a billion dollars. But when you look closer at it, it is split up between a dozen agencies, and within each agency it’s split again into a dozen different areas, much of it ends up as $100,000 grants. We looked into it with regards to carbon nanotube composites, and it appeared that about thirty million dollars was going into high-strength materials — and a lot of that was being spent internally in a lot of the agencies; in the end there’s only a couple of million dollars out of the billion-dollar budget going into something that would be useful to us. The money doesn’t have focus, and it’s spread out to include everything. You get a little bit of effort in a thousand different places. A lot of the budget is spent on one entity trying to play catch-up with whoever is leading. Instead of funding the leader, they’re funding someone else internally to catch up.

Again, here is a problem similar to the one we find in software today: people playing catch-up rather than working together. I don’t know what nanotechnology scientists do every day, but it sounds like they would do well to follow in the footsteps of our free software pioneers and start cooperating.

The widespread production of nanotubes could be the start of a nanotechnology revolution. And the space elevator, the killer app of nanotubes, will enable the colonization of space.

Why?

William Bradford, speaking in 1630 of the founding of the Plymouth Bay Colony, said that all great and honorable actions are accompanied with great difficulties, and both must be enterprised and overcome with answerable courage.

There is no strife, no prejudice, no national conflict in outer space as yet. Its hazards are hostile to us all. Its conquest deserves the best of all mankind, and its opportunity for peaceful cooperation may never come again. But why, some say, the moon? Why choose this as our goal? And they may well ask why climb the highest mountain? Why, 35 years ago, fly the Atlantic? Why does Rice play Texas?

We choose to go to the moon. We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.

It is for these reasons that I regard the decision last year to shift our efforts in space from low to high gear as among the most important decisions that will be made during my incumbency in the office of the Presidency.

In the last 24 hours we have seen facilities now being created for the greatest and most complex exploration in man’s history. We have felt the ground shake and the air shattered by the testing of a Saturn C-1 booster rocket, many times as powerful as the Atlas which launched John Glenn, generating power equivalent to 10,000 automobiles with their accelerators on the floor. We have seen the site where five F-1 rocket engines, each one as powerful as all eight engines of the Saturn combined, will be clustered together to make the advanced Saturn missile, assembled in a new building to be built at Cape Canaveral as tall as a 48 story structure, as wide as a city block, and as long as two lengths of this field.

The growth of our science and education will be enriched by new knowledge of our universe and environment, by new techniques of learning and mapping and observation, by new tools and computers for industry, medicine, the home as well as the school.

I do not say that we should or will go unprotected against the hostile misuse of space any more than we go unprotected against the hostile use of land or sea, but I do say that space can be explored and mastered without feeding the fires of war, without repeating the mistakes that man has made in extending his writ around this globe of ours.

We have given this program a high national priority — even though I realize that this is in some measure an act of faith and vision, for we do not now know what benefits await us. But if I were to say, my fellow citizens, that we shall send to the moon, 240,000 miles away from the control station in Houston, a giant rocket more than 300 feet tall, the length of this football field, made of new metal alloys, some of which have not yet been invented, capable of standing heat and stresses several times more than have ever been experienced, fitted together with a precision better than the finest watch, carrying all the equipment needed for propulsion, guidance, control, communications, food and survival, on an untried mission, to an unknown celestial body, and then return it safely to earth, re-entering the atmosphere at speeds of over 25,000 miles per hour, causing heat about half that of the temperature of the sun — almost as hot as it is here today — and do all this, and do it right, and do it first before this decade is out — then we must be bold.

John F. Kennedy, September 12, 1962

Lunar Lander at the top of a rocket. Rockets are expensive and impose significant design constraints on space-faring cargo.

NASA has 18,000 employees and a $17 billion budget. Even with a fraction of those resources, its ability to oversee the design, handle mission control, and work with many partners would be more than equal to this task.

If NASA doesn’t build the space elevator, someone else might, and it would change almost everything about how NASA does things today. NASA’s tiny (15-foot-wide) new Orion spacecraft, which was built to return us to the moon, was designed to fit atop a rocket and return the astronauts to Earth with a 25,000-mph thud, just like in the Apollo days. Without the constraints a rocket imposes, NASA’s spaceship to get us back to the moon would have a very different design. NASA would need to throw away a lot of the R&D they are now doing if a space elevator were built.

Another reason the space elevator makes sense is that it would get the various scientists at NASA to work together on a big, shared goal. NASA has recently sent robots to Mars to dig two-inch holes in the dirt. That type of experience is similar to the skills necessary to build the robotic climbers that would climb the elevator, putting those scientists to use on a greater purpose.

Space debris is a looming hazard, and a threat to the ribbon:

Map of space debris. The US Strategic Command monitors 10,000 large objects to prevent them from being misinterpreted as a hostile missile. China blew up a satellite in January 2007, which created 35,000 pieces of debris larger than 1 centimeter.

The space elevator provides both a motive, and a means to launch things into space to remove the debris. (The first elevator will need to be designed with an ability to move around to avoid debris!)

Once you have built your first space elevator, the cost of building the second one drops dramatically. A space elevator will eventually make it $10 per pound to put something into space. This will open many doors for scientists and engineers around the globe: bigger and better observatories, a spaceport at GEO, and so forth.

Surprisingly, one of the biggest incentives for space exploration is likely to be tourism. From Hawaii to Africa to Las Vegas, the primary revenue in many exotic places is tourism. We will go to the stars because man is driven to explore and see new things.

Space is an extremely harsh place, which is why it is such a miracle that there is life on Earth to begin with. The moon is too small to have an atmosphere, but we can terraform Mars to create one, and make it safe from radiation and pleasant to visit. This will also teach us a lot about climate change, and in fact, until we have terraformed Mars, I am going to assume the global warming alarmists don’t really know what they are talking about yet. [2] One of the lessons in engineering is that you don’t know how something works until you’ve done it once.

Terraforming Mars may sound like a silly idea today, but it is simply another engineering task. [3] I worked in several different groups at Microsoft, and even though the set of algorithms surrounding databases is completely different from those for text engines, they are all engineering problems and the approach is the same: break a problem down and analyze each piece. (One of the interesting lessons I learned at Microsoft was the difference between real life and standardized tests. In a standardized test, if a question looks hard, you should skip it and move on so as not to waste precious time. At Microsoft, we would skip past the easy problems and focus our time on the hard ones.)

Engineering teaches you that there are an infinite number of ways to attack a problem, each with various trade-offs; it might take 1,000 years to terraform Mars if we were to send one ton of material, but only 20 years if we could send 1,000 tons of material. Whatever we finally end up doing, the first humans to visit Mars will be happy that we turned it green for them. This is another way our generation can make its mark.

A space elevator is a doable mega-project, but there is no progress beyond a few books and conferences because the very small number of people on this planet who are capable of initiating this project are not aware of the feasibility of the technology.

Brad Edwards, one of the world’s experts on the space elevator, has a PhD and a decade of experience designing satellites at Los Alamos National Labs, and yet he has told me that he is unable to get into the doors of leadership at NASA, or the Gates Foundation, etc. No one who has the authority to organize this understands that a space elevator is doable.

Glenn Reynolds has blogged about the space elevator on his very influential Instapundit.com, yet a national dialog about this topic has not yet happened, and NASA is just marching ahead with its expensive, dim ideas. My book is an additional plea: one more time, and with feeling!

How and When

It does not follow from the separation of planning and doing in the analysis of work that the planner and the doer should be two different people. It does not follow that the industrial world should be divided into two classes of people: a few who decide what is to be done, design the job, set the pace, rhythm and motions, and order others about; and the many who do what and as they are told.

—Peter Drucker

There are many interesting details surrounding a space elevator, and for those interested, I recommend The Space Elevator, co-authored by Brad Edwards.

The size of the first elevator is one of the biggest questions to resolve. If you were going to lay fiber optic cables across the Atlantic Ocean, you’d set aside a ton of bandwidth capacity. Likewise, the most important metric for our first space elevator is its capacity. I believe at least 100 tons per day is a worthy requirement; otherwise humans will revert to form and start hoarding the cargo space.

The one other limitation with current designs is that they assume climbers which travel hundreds of miles per hour. This is a fine speed for cargo, but it means that it will take days to get into orbit. If we want to send humans into space in an elevator, we need to build climbers which can travel at least 10,000 miles per hour. While this seems ridiculously fast, if you accelerate to this speed over a period of minutes, it will not be jarring. Perhaps this should be the challenge for version two if they can’t get it done the first time.
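Two bits of arithmetic sit behind this paragraph, using round numbers of my own choosing: how long a slow cargo climber would take to reach geosynchronous altitude, and how gentle the acceleration of a 10,000 mph passenger climber would actually be.

    # Back-of-envelope climber numbers. Altitude, speeds, and the acceleration
    # window are my own rounded assumptions.

    GEO_ALTITUDE_MI = 22_236     # geosynchronous altitude above the surface, in miles

    cargo_speed_mph = 200
    print(f"cargo climber: {GEO_ALTITUDE_MI / cargo_speed_mph / 24:.1f} days to GEO")

    passenger_speed_mph = 10_000
    ramp_minutes = 10
    accel_g = (passenger_speed_mph * 0.44704) / (ramp_minutes * 60) / 9.81
    print(f"passenger climber: {GEO_ALTITUDE_MI / passenger_speed_mph:.1f} hours to GEO; "
          f"ramping up over {ramp_minutes} minutes is about {accel_g:.2f} g")

At well under one g for a ramp-up lasting minutes, such a climb would indeed not be jarring.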

The conventional wisdom amongst those who think it is even possible is that it will take between 20 and 50 years to build a space elevator. However, anyone who makes such predictions doesn’t understand that engineering is a fungible commodity. I can only presume they have never had the privilege of working with a team of 100 people who in 3 days accomplish as much as you will in a year. Two people will, in general, accomplish something twice as fast as one person. [4] How can you say something will unequivocally take a certain amount of time when you don’t specify how many resources it will require or how many people you plan to assign to the task?

Furthermore, predictions are usually way off. If you asked someone how long it would take unpaid volunteers to make Wikipedia as big as the Encyclopedia Britannica, no one would have guessed the correct answer of two and a half years. From creating a space elevator to world domination by Linux, anything can happen in far less time than we think is possible if everyone simply steps up to play their part. The way to be a part of the future is to invent it, by unleashing our scientific and creative energy towards big, shared goals. Wikipedia, as our encyclopedia, was an inspiration to millions of people, and so the resources have come piling in. The way to get help is to create a vision that inspires people. In a period of 75 years, man went from using horses and wagons to landing on the moon. Why should it take 20 years to build something that is 99% doable today?

Many of the components of a space elevator are simple enough that college kids are building prototype elevators in their free time. The Elevator:2010 contest is sponsored by NASA, and while these contests have generated excitement and interest in the press, the teams are building toys, much like a radio-controlled airplane is a toy compared to a Boeing airliner.

I believe we could have a space elevator built in 7 years. If you divvy up five years of work per person, and add in a year to ramp up and test, you can see how seven years is quite reasonable. Man landed on the moon 7 years after Kennedy’s speech, exactly as he ordained, because dates can be self-fulfilling prophecies. It allows everyone to measure themselves against their goals, and determine if they need additional resources. If we decided we needed an elevator because our civilization had a threat of extermination, one could be built in a very short amount of time.

If the design of the hardware and the software were done in a public fashion, others could take the intermediate efforts and test them and improve them, therefore saving further engineering time. Perhaps NASA could come up with hundreds of truly useful research projects for college kids to help out on instead of encouraging them to build toys. There is a lot of software to be written and that can be started now.

The Unknown Unknown is the nanotubes, but nearly all the other pieces can be built without having any access to them. We will only need them wound into a big spool on the launch date.

I can imagine that any effort like this would get caught up in a tremendous amount of international political wrangling that could easily add years to the project. We should not let this happen, and we should remind each other that the space elevator is just the railroad car to space — the exciting stuff is the cargo inside and the possibilities out there. A space elevator is not a zero-sum endeavor: it would enable lots of other big projects that are totally unfeasible currently. A space elevator would enable various international space agencies that have money, but no great purpose, to work together on a large, shared goal, and as a side effect it would strengthen international relations. [5]


[1] The Europeans aren’t providing great leadership either. One of the big investments of their space agencies, besides the ISS, is to build a duplicate GPS satellite constellation, which they are doing primarily because of anti-Americanism! Too bad they don’t realize that their emotions are causing them to re-implement 35-year-old technology, instead of spending that $5 billion on a truly new advancement. Cloning GPS in 2013: quite an achievement, Europe!

[2] Carbon is not a pollutant and is valuable. It is 18% of the mass of the human body, but only 0.03% of the mass of the Earth. If carbon were more widespread, diamonds would be cheaper. Driving very fast cars is the best way to unlock the carbon we need. Anyone who thinks we are running out of energy doesn’t understand the algebra in E = mc².

[3] Mars’ moon Phobos is only 3,700 miles above Mars, and if we create an atmosphere, it will slow down and crash. We will need to find a place to crash the fragments; I suggest one of the largest canyons we can find; we could put them next to a cross dipped in urine and call it the largest man-made art.

[4] Fred Brooks’ The Mythical Man-Month argues that adding engineers late to a project makes a project later, but ramp-up time is just noise in the management of an engineering project. Also, wikis, search engines, and other technologies invented since his book have lowered the overhead of collaboration.

[5] Perhaps the Europeans could build the station at GEO. Russia could build the shuttle craft to move cargo between the space elevator and the moon. The Middle East could provide an electrical grid for the moon. China could take on the problem of cleaning up the orbital space debris and build the first moon base. Africa could attack the problem of terraforming Mars, etc.

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here are several more sections on AI topics. I hope you find these pages food for thought and I appreciate any feedback.


The future is open source everything.

—Linus Torvalds

That knowledge has become the resource, rather than a resource, is what makes our society post-capitalist.

—Peter Drucker, 1993

Imagine 1,000 people, broken up into groups of five, working on two hundred separate encyclopedias, versus that same number of people working on one encyclopedia. Which will be the best? This sounds like a silly analogy when described in the context of an encyclopedia, but it is exactly what is going on in artificial intelligence (AI) research today. [1] Some say free software doesn’t work in theory, but it does work in practice. In truth, it “works” in proportion to the number of people who are working together, and their collective efficiency.

In early drafts of this book, I had positioned this chapter after the one explaining the economic and legal issues around free software. However, I now believe it is important to discuss artificial intelligence separately and first, because AI is the holy grail of computing, and the reason we haven’t solved AI is that there are no free software codebases that have gained critical mass. Far more than enough people are out there, but they are usually working in teams of one or two, or on proprietary codebases.

Deep Blue has been Deep-Sixed

Some people worry that artificial intelligence will make us feel inferior, but then, anybody in his right mind should have an inferiority complex every time he looks at a flower.

—Alan Kay, computer scientist

The source code for IBM’s Deep Blue, the first chess machine to beat then-reigning World Champion Garry Kasparov, was built by a team of about five people. That code has been languishing in a vault at IBM ever since because it was not created under a license that would enable further use by anyone, even though IBM is not attempting to make money from the code or using it for anything.

The second-best chess engine in the world, Deep Junior, is also not free, and is therefore being worked on by a very small team. If we have only small teams of people attacking AI, or writing code and then locking it away, we are not going to make progress any time soon towards truly smart software.

Today’s chess computers have no true AI in them; they simply play out moves and use human-created analysis to measure the results. If you were to tweak the computer’s value for how much a queen is worth compared to a pawn, the machine would start losing and wouldn’t even understand why. It comes off as intelligent only because very smart chess experts have programmed into it precisely how to analyze moves and how to rate the relative importance of pieces and their positions.
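To see how shallow that kind of “intelligence” is, here is a toy sketch of a hand-tuned evaluation function. The piece values are conventional textbook numbers, not Deep Blue’s actual weights, and the board encoding is invented purely for illustration:

```python
# Hand-coded material evaluation: positive scores favor White (uppercase pieces).
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(board):
    """Sum human-assigned piece values over a board given as a string of piece letters."""
    score = 0
    for piece in board:
        if piece.upper() not in PIECE_VALUES:
            continue  # skip empty squares or separators
        value = PIECE_VALUES[piece.upper()]
        score += value if piece.isupper() else -value
    return score

# White is up a queen; change PIECE_VALUES["Q"] to 1 and the engine would happily
# trade its queen away without "understanding" why it then starts losing.
print(material_score("QRRPPPP" + "rrpppp"))  # 9 + 10 + 4 - 10 - 4 = 9
```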

Deep Blue could analyze two hundred million positions per second, compared to grandmasters who can analyze only 3 positions per second. Who is to say where that code might be today if chess AI aficionados around the world had been hacking on it for the last 10 years?

DARPA Grand Challenge

Proprietary software developers have the advantages money provides; free software developers need to make advantages for each other. I hope some day we will have a large collection of free libraries that have no parallel available to proprietary software, providing useful modules to serve as building blocks in new free software, and adding up to a major advantage for further free software development. What does society need? It needs information that is truly available to its citizens—for example, programs that people can read, fix, adapt, and improve, not just operate. But what software owners typically deliver is a black box that we can’t study or change.

—Richard Stallman

The hardest computing challenges we face are man-made: language, roads and spam. Take, for instance, robot-driven cars. We could do this without a vision system, and modify every road on the planet by adding driving rails or other guides for robot-driven cars, but it is much cheaper and safer to build software for cars to travel on roads as they exist today — a chaotic mess.

At the annual American Association for the Advancement of Science (AAAS) conference in February 2007, the “consensus” among the scientists was that we will have driverless cars by 2030. This prediction is meaningless because those working on the problem are not working together, just as those working on the best chess software are not working together. Furthermore, as American cancer researcher Sidney Farber has said, “Any man who predicts a date for discovery is no longer a scientist.”

Today, Lexus has a car that can parallel park itself, but its vision system needs only a very vague idea of the obstacles around it to accomplish this task. The challenge of building a robot-driven car rests in creating a vision system that makes sense of painted lines, freeway signs, and the other obstacles on the road, including dirtbags not following “the rules”.

The Defense Advanced Research Projects Agency (DARPA), which, unlike Al Gore, really did invent the Internet, has sponsored several contests to build robot-driven vehicles:


Stanley, Stanford University’s winning entry for the 2005 challenge. It might not run over a Stop sign, but it wouldn’t know to stop.

Like the parallel parking scenario, the DARPA Grand Challenge of 2004 required only a simple vision system. Competing cars traveled over a mostly empty dirt road and were given a detailed series of map points. Even so, many of the cars didn’t finish or didn’t perform confidently. There is an expression in engineering, “garbage in, garbage out”: if a car sees poorly, it drives poorly.

What was disappointing about the first challenge was that an enormous amount of software was written to operate these vehicles yet none of it has been released (especially the vision system) for others to review, comment on, improve, etc. I visited Stanford’s Stanley website and could find no link to the source code, or even information such as the programming language it was written in.

Some might wonder why people should work together in a contest, but if all the cars used rubber tires, Intel processors and the Linux kernel, would you say they were not competing? It is a race, with the fastest hardware and driving style winning in the end. By working together on some of the software, engineers can focus more on the hardware, which is the fun stuff.

The following is a description of the computer vision pipeline required to successfully operate a driverless car; a minimal code sketch of these stages appears after the list. Whereas Stanley’s entire software team involved only 12 part-time people, the vision software alone is a problem so complicated that it will take an effort comparable in complexity to the Linux kernel to build it:

  • Image acquisition: converting sensor inputs from 2 or more cameras, radar, heat, etc. into a 3-dimensional image sequence
  • Pre-processing: noise reduction, contrast enhancement
  • Feature extraction: lines, edges, shape, motion
  • Detection/segmentation: finding portions of the images that need further analysis (highway signs)
  • High-level processing: data verification, text recognition, object analysis and categorization

The 5 stages of an image recognition pipeline.
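To make the shape of that pipeline concrete, here is a minimal sketch of the five stages in Python with OpenCV. It is not code from Stanley or any other DARPA entry; the input file name, thresholds, and stand-in classifier are assumptions for illustration only.

```python
# A rough sketch of the five pipeline stages above using OpenCV.
import cv2
import numpy as np

def acquire(path):
    """Image acquisition: read one camera frame (stereo/radar fusion omitted)."""
    frame = cv2.imread(path)
    if frame is None:
        raise SystemExit(f"could not read {path}")
    return frame

def preprocess(img):
    """Pre-processing: noise reduction and contrast enhancement."""
    denoised = cv2.GaussianBlur(img, (5, 5), 0)
    gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
    return cv2.equalizeHist(gray)

def extract_features(gray):
    """Feature extraction: edges and straight lines (lane markings, sign posts)."""
    edges = cv2.Canny(gray, 50, 150)
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=30, maxLineGap=10)
    return edges, lines

def segment(edges):
    """Detection/segmentation: keep regions large enough to deserve further analysis."""
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4 return signature
    return [c for c in contours if cv2.contourArea(c) > 500]

def classify(candidates):
    """High-level processing: a stand-in classifier; a real system would use a trained model."""
    return [f"candidate region, area={cv2.contourArea(c):.0f}" for c in candidates]

if __name__ == "__main__":
    frame = acquire("camera_frame.png")   # hypothetical file name
    gray = preprocess(frame)
    edges, lines = extract_features(gray)
    print(classify(segment(edges)))
```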

A lot of software needs to be written in support of such a system:


The vision pipeline is the hardest part of creating a robot-driven car, but even such diagnostic software is non-trivial.

In 2007, there was a new DARPA Urban challenge. This is a sample of the information given to the contestants:


It is easier and safer to program a car to recognize a Stop sign than it is to point out the location of all of them.

Constructing a vision pipeline that can drive in an urban environment is a much harder software problem. However, if you look at the vision requirements needed to solve the Urban Challenge, it is clear that recognizing shapes and motion is all that is required, and those are the same requirements that existed in the 2004 challenge! Yet even in the 2007 contest, there was no more sharing than in the previous one.

Once we develop the vision system, everything else is technically easy. Video games contain computer-controlled drivers that can race you while shooting and swearing at you. Their trick is that they already have detailed information about all of the objects in their simulated world.

After we’ve built a vision system, there are still many fun challenges to tackle: preparing for Congressional hearings to argue that these cars should have a speed limit controlled by the computer, or telling your car not to drive aggressively and spill your champagne, or testing and building confidence in such a system.2

Eventually, our roads will get smart. Once we have traffic information, we can have computers efficiently route vehicles around any congestion. One study found that traffic jams cost the average large city $1 billion a year.

No organization today, not even Microsoft or Google, employs hundreds of computer vision experts. Do you think GM would be gutsy enough to fund a team of 100 vision experts even if it thought it could corner this market?

There are enough people worldwide working on the vision problem right now. If we could pool their efforts into one codebase, written in a modern programming language, we could have robot-driven cars in five years. It is not a matter of invention; it is a matter of engineering.

1 One website documents 60 different pieces of source code that implement the Fourier transform, an important software building block. The situation is the same for neural networks, computer vision, and many other advanced technologies.
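As a sense of scale for that duplicated effort, here is what the building block looks like when a shared implementation is used; this sketch relies on NumPy’s FFT rather than any of the 60 hand-rolled versions:

```python
# Find the dominant frequency of a signal with a shared FFT implementation.
import numpy as np

t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)           # a 5 Hz sine wave sampled for one second
spectrum = np.fft.rfft(signal)
peak_bin = int(np.argmax(np.abs(spectrum)))
print(peak_bin)                               # 5: the frequency we put in
```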

2 There are various privacy issues inherent in robot-driven cars. When computers know their location, it becomes easy to build a “black box” that would record all this information and even transmit it to the government. We need to make sure that machines owned by a human stay under his control, and do not become controlled by the government without a court order and a compelling burden of proof.

This is a crosspost from Nextbigfuture

I looked at nuclear winter and city firestorms a few months ago; I will summarize the case I made then in the next section. There are significant additions based on my further research and on email exchanges with Prof. Alan Robock and Prof. Brian Toon, who wrote the nuclear winter research.

The steps needed to prove nuclear winter:
1. Prove that enough cities will have firestorms or big enough fires (the claim here is that this does not happen).
2. Prove that when enough cities over a sufficient area have big fires, enough smoke and soot gets into the stratosphere (this claim is in trouble because of the Kuwait fires).
3. Prove that the condition persists and affects the climate as the models predict (others have questioned this, but that issue is not addressed here).

The nuclear winter case is predicated on getting 150 million tons (the 150-teragram case) of soot and smoke into the stratosphere and having it stay there. The assumption seemed to be that cities will be targeted and will burn in massive firestorms. Alan Robock indicated that they only included fires based on the radius of ignition from the atmospheric blasts. However, in the Scientific American article and in their 2007 paper, the stated assumptions are:

assuming each fire would burn the same area that actually did burn in Hiroshima and assuming an amount of burnable material per person based on various studies.

The implicit assumption is that all buildings react the way the buildings in Hiroshima reacted on that day.

Therefore, the results from Hiroshima are baked into the nuclear winter models, even though those results depended on the conditions of that particular day:
* 27 days without rain
* breakfast burners that overturned in the blast and set fires
* mostly wood and paper buildings
* a firestorm that burned five times more area than Nagasaki did, even though Nagasaki was hardly more fire-resistant: it had the same wood and paper buildings and a high population density
Recommendations
Build only with non-combustible materials (cement, brick, or specially treated fire-resistant wood). Make roofs, floors, and shingles non-combustible. Add fire retardants to any high-volume material that could become fuel. Look at city planning to reduce fire risk across the city. Have a plan for putting out city-wide fires (such as a controlled flood from dams, which are already near many cities).

Continue reading “Nuclear Winter and Fire and Reducing Fire Risks to Cities” at Nextbigfuture.

I am a former Microsoft programmer who wrote a book (for a general audience) about the future of software called After the Software Wars. Eric Klien has invited me to post on this blog. Here is my section entitled “Software and the Singularity”. I hope you find this food for thought and I appreciate any feedback.


Futurists talk about the “Singularity”, the time when computational capacity will surpass the capacity of human intelligence. Ray Kurzweil predicts it will happen in 2045, and according to its proponents the world will be amazing then.3 The flaw in such a date estimate, beyond the fact that such estimates are always prone to extreme error, is that continuous learning is not yet part of the foundation. Any AI code lives in the fringes of the software stack and is either proprietary or written by small teams of programmers.

I believe the benefits inherent in the Singularity will arrive as soon as our software becomes “smart”, and we don’t need to wait for any further Moore’s law progress for that to happen. Computers today can do billions of operations per second, like adding 123,456,789 and 987,654,321. If you could do one such calculation in your head every second, it would take you about 30 years to do the billion that your computer can do in a single second.
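The arithmetic behind that comparison, as a quick back-of-the-envelope check:

```python
# One addition per second, for a billion additions, converted to years.
seconds_per_year = 60 * 60 * 24 * 365.25
additions = 1_000_000_000
print(additions / seconds_per_year)  # ~31.7 years to match one second of computer time
```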

Even if you don’t think computers have the necessary hardware horsepower today, understand that in many scenarios the size of the input is the primary driver of the processing power required for the analysis. In image recognition, for example, the amount of work required to interpret an image is mostly a function of the size of the image. Each step in the image recognition pipeline, like the corresponding processes in our brain, dramatically reduces the amount of data from the previous step. At the beginning of the analysis might be a one-million-pixel image, requiring 3 million bytes of memory. At the end of the analysis is the conclusion that you are looking at your house, a concept that requires only tens of bytes to represent. The first step, working on the raw image, requires the most processing power, so it is the image resolution (and frame rate) that sets the requirements, and those are values that are trivial to change. No one has shown robust vision recognition software running at any speed, on any sized image!
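To make the data-reduction point concrete, here is a rough tally of the bytes surviving each stage for a one-megapixel frame; the later-stage sizes are orders of magnitude I am assuming for illustration, not measurements from any real system:

```python
# Approximate data volume remaining after each stage of the pipeline.
stages = [
    ("raw RGB image", 1_000_000 * 3),   # 3 bytes per pixel
    ("edge map", 1_000_000 // 8),       # one bit per pixel, packed
    ("object candidates", 50 * 64),     # a few dozen boxes and labels
    ("final concept", 16),              # "this is my house"
]
for name, size in stages:
    print(f"{name}: {size:,} bytes")
```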

While a brain is different from a computer in that it works in parallel, such parallelism only makes the answer arrive faster; it does not change the result. Anything accomplished in our parallel brain could also be accomplished on computers of today, which can do only one thing at a time but at a rate of billions of operations per second. A 1-gigahertz processor can perform 1,000 different operations on a million pieces of data in one second. With such speed, you don’t even need multiple processors! Even so, more parallelism is coming.4
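A small illustration of that claim, using NumPy’s vectorized arithmetic as a stand-in for parallel hardware; it only shows that serial and parallel-style execution give the same answer, not how fast either is:

```python
import numpy as np

data = np.arange(1_000_000)
serial = [x * 2 + 1 for x in data]         # one element at a time
vectorized = data * 2 + 1                  # the whole array "at once"
print(np.array_equal(serial, vectorized))  # True: parallelism changes the speed, not the result
```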

3 His prediction is that the number of computers, times their computational capacity, will surpass the number of humans, times their computational capacity, in 2045. This calculation seems flawed for several reasons:

  1. We will be swimming in computational capacity long before then. An intelligent agent twice as fast as the previous one is not necessarily more useful.
  2. Many of the neurons of the brain are not spent on reason, and so shouldn’t be in the calculations.
  3. Billions of humans are merely subsisting, and are not plugged into the global grid, and so shouldn’t be measured.
  4. There is no amount of continuous learning built into today’s software.

Each of these would tend to push the Singularity closer and support the argument that the benefits of the Singularity are not waiting on hardware. Humans make computers smarter, and computers make humans smarter, so this feedback loop is another reason that 2045 is a meaningless moment in time.

4 Most computers today contain a dual-core CPU, and processor makers promise that chips with 10 and more cores are coming. Intel’s processors also have parallel processing capabilities known as MMX and SSE that are easily adapted to the work of the early stages of any analysis pipeline. Intel would add even more of this parallel processing support if applications put it to better use. Furthermore, graphics cards exist primarily to do work in parallel, and this hardware could be adapted to AI if it is not already usable for it.

With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

The summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference, following the inaugural conference in Los Angeles in December 2009, is being held on the East Coast at Harvard University’s prestigious Science Hall on June 12–13. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the keynote speaker of the conference.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and the Chief Science Officer of SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions with regard to research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is “The Rise Of The Citizen Scientist”, as illustrated by Alex Lightman, Executive Director of Humanity+, in his talk:

“Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and ‘peak everything’. Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories.”

Humanity+ Summit @ Harvard is an unmissable event for everyone who is interested in the evolution of the rapidly changing human condition, and the impact of accelerating technological change on the daily lives of individuals, and on our society as a whole. Tickets start at only $150, with an additional 50% discount for students registering with the coupon STUDENTDISCOUNT (valid student ID required at the time of admission).

With over 40 speakers and 50 sessions in two jam-packed days, the attendees and the speakers will have many opportunities to interact and discuss, complementing the conference with the necessary networking component.

Other speakers already listed on the H+ Summit program page include:

  • David Orban, Chairman of Humanity+: “Intelligence Augmentation, Decision Power, And The Emerging Data Sphere”
  • Heather Knight, CTO of Humanity+: “Why Robots Need to Spend More Time in the Limelight”
  • Andrew Hessel, Co-Chair at Singularity University: “Altered Carbon: The Emerging Biological Diamond Age”
  • M. A. Greenstein, Art Center College of Design: “Sparking our Neural Humanity with Neurotech!”
  • Michael Smolens, CEO of dotSUB: “Removing language as a barrier to cross cultural communication”

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate, and provoke, advancing our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard — The Rise Of The Citizen Scientist
June 12–13, Harvard University
Cambridge, MA

You can register at http://www.eventbrite.com/event/648806598/friendsofhplus/4141206940.