
Strong AI, or Artificial General Intelligence (AGI), refers to self-improving intelligent systems capable of engaging with theoretical and real-world problems with the flexibility of an intelligent living being and the performance and accuracy of a machine. Promising foundations for AGI exist in the current fields of stochastic and cognitive science as well as traditional artificial intelligence. My aim in this post is to give a general readership a basic insight into, and feeling for, the issues involved in dealing with the complexity and universality of an AGI.

Classical AI, such as machine learning algorithms and expert systems, is already heavily applied to today's real-world problems: mature machine learning algorithms profitably exploit patterns in customer behaviour, find correlations in scientific data or even predict negotiation strategies [1] [2], and genetic algorithms serve similar roles. With the next technology for organizing knowledge on the net, the semantic web, which deals with machine-interpretable understanding of words in the context of natural language, we may start inventing early pieces of the technology that will play a role in the future development of AGI. Semantic approaches come from computer science, sociology and current AI research, and promise to describe and 'understand' real-world concepts, enabling our computers to build interfaces to real-world concepts and their interrelations more autonomously. Actually getting from expert systems to AGI will require approaches for bootstrapping self-improving systems and more research on cognition, but it must also involve crucial security considerations. Institutions associated with this early research include the Singularity Institute [3] and the Lifeboat Foundation [4].

In the recent past, we have faced new kinds of security challenges: DoS attacks, email and PDF worms and a plethora of other malware, which sometimes even made it into military and other sensitive networks and stole credit cards and private data en masse. These were and are among the first serious security incidents related to the Internet. Still, all of them followed a narrow and predictable pattern, constrained by our current generation of PCs, (in-)security architecture, network protocols, software applications and, of course, human flaws (e.g. the emotional response exploited by the "ILOVEYOU" virus). Understanding the security implications of strong AI means first realizing that, once an AGI takes off hard enough, there will probably no longer be human-predictable hardware, software or interfaces around for any extended period of time.

To grasp the new security implications, it's important to understand how insecurity can arise from the complexity of technological systems. The vast potential of complex systems often makes their effects hard to predict for the human mind, which is riddled with biases rooted in its biological evolution. Even the application of the simplest mathematical rules can produce complex results that are hard to understand and predict by common sense. Cellular automata, for example, are simple rules for generating a new row of cells based on which cells the same rule produced in the previous step. Many of these rules can be encoded in just a few bytes, yet they generate astounding complexity.

Cellular automaton, produced by a simple recursive formula
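To make the point concrete, here is a minimal Python sketch of an elementary cellular automaton. It is my own illustration rather than code from the original article, and the rule number 110 is just one example of a single-byte rule that produces rich behaviour.

```python
# Minimal sketch of an elementary cellular automaton (illustrative rule 110).
# Each new row of cells is derived from the previous row by a rule that fits
# into a single byte: one output bit per possible 3-cell neighbourhood.

def step(cells, rule=110):
    """Compute the next generation from the current row of 0/1 cells."""
    n = len(cells)
    nxt = []
    for i in range(n):
        left, centre, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value 0..7
        nxt.append((rule >> neighbourhood) & 1)              # look up output bit
    return nxt

# Start from a single live cell and print a few generations.
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```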

The Fibonacci sequence is another popular example of unexpected complexity. Based on a very short recursive equation, the sequence generates a pattern of incremental growth which can be visualized as a spiral, resembling the shell of a snail and many other patterns in nature. A combination of Fibonacci spirals, for example, can resemble the motif of the head of a sunflower. This 'simple' Fibonacci sequence has even been used to model some fundamental dynamics of systems as complex as the stock market and the global economy.

Sunflower head showing a Fibonacci sequence pattern
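As a small illustration (my own, assuming nothing beyond the recurrence itself), the following Python sketch generates the sequence and shows how ratios of consecutive terms converge on the golden ratio, the growth factor behind the spiral patterns mentioned above.

```python
# The Fibonacci recurrence and its link to the golden ratio,
# the growth factor of the spirals seen in sunflower heads and snail shells.

def fibonacci(n):
    """Return the first n Fibonacci numbers."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

fib = fibonacci(20)
print(fib)

# Ratios of consecutive terms approach the golden ratio (about 1.618).
for a, b in zip(fib, fib[1:]):
    print(f"{b}/{a} = {b / a:.6f}")
```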

Traditional software is many orders of magnitude higher in complexity than basic mathematical formulae, and thus many orders of magnitude less predictable. Artificial general intelligence may be expected to work with rules even more complex than low-level computer programs, of a complexity comparable to natural human language, which would place it yet several orders of magnitude higher in complexity than traditional software. The resulting security implications have not yet been researched systematically, but they are likely to be at least as hard to handle as this comparison suggests.

Practical security is not about achieving perfection, but about reducing risks to a minimum. A current consensus among strong AI researchers is that we can only improve the chances for an AI to be friendly, i.e. an AI acting in a secure manner and having a positive rather than negative long-term effect on humanity [5], and that this must be a crucial design aspect from the very beginning. Research into Friendly AI started out with a serious consideration of Asimov's Laws of Robotics [6] and is based on the application of probabilistic models, cognitive science and social philosophy to AI research.

Many researchers who believe in the viability of AGI take it a step further and predict a technological singularity. Just like the assumed physical singularity that started our universe (the Big Bang), a technological singularity is expected to increase the rate of technological progress far beyond what we are used to from the history of humanity, i.e. beyond the current 'laws' of progress. Another important notion associated with the singularity is that we cannot predict even the most fundamental changes occurring after it, because things would, by definition, progress faster than we are currently able to predict. Therefore, in a similar way in which we believe the creation of the universe depended on its initial conditions (in the Big Bang case, the few physical constants from which the others can be derived), many researchers in this field believe that AI security strongly depends on initial conditions as well, i.e. on the design of the bootstrapping software. If we succeed in manufacturing a general-purpose decision-making mind, then its whole point is self-modification and self-improvement. Hence, our direct control over it would be limited to its first iteration and the initial conditions of a strong AI, which we can influence mostly by getting the first iteration of its hardware and software design right.

Our approach to optimizing those initial conditions must consist of working as carefully as possible. Space technology offers a useful analogy here and points in the general direction such development should take. In rocket science and space technology, all measurements and mathematical equations must be as precise as our current technological standards allow. Multiple redundancies must also be present for every system, since every single aspect of a system can be expected to fail. Despite this, many rocket launches still fail today, although error rates are steadily improving.

Additionally, humans interacting with an AGI may be a major security risk themselves, as they may be persuaded by the AGI to remove its limitations. Since an AGI can be expected to be very persuasive if we expect it to exceed human intellect, we should not focus on physical limitations alone, but also on making the AGI 'friendly'. Yet even in designing this 'friendliness', our minds are largely unprepared to deal with the consequences of an AGI's complexity, because the way we perceive and handle potential issues and risks stems from evolution. As a product of natural evolution, our behaviour helps us deal with animal predators, interact in human societies and care for our children, but not anticipate the complexity of man-made machines. These natural traits of human perception and cognition, shaped by evolution, are called cognitive biases.

Sadly, as helpful as they may be in natural (i.e., non-technological) environments, these are the very same behaviours that are often counterproductive when dealing with the unforeseeable complexity of our own technology and modern civilization. If you don't yet see why cognitive biases matter so much for the security of future AI, you're probably in good company. But there are good reasons why this is a crucial issue that researchers, developers and users of future generations of general-purpose AI need to take into account. One of the major reasons for founding the earlier-mentioned Singularity Institute for AI [3] was to get the basics right, including grasping the cognitive biases that necessarily influence the technological design of an AGI.

What do these considerations practically imply for the design of strong AI? Some of the traditional IT security issues that need to be addressed in computer programs are: input validation, access limitations, avoiding buffer overflows, safe conversion of data types, setting resource limits and secure error handling (a brief code sketch of a few of these appears after the list below). All of these are valid and important issues that must be addressed in any piece of software, including weak and strong AI. However, we must avoid underestimating the design goals for a strong AI and mitigate the risk on all levels from the beginning. To do this, we must care about more than the traditional IT security issues. An AGI will interface with the human mind through text and direct communication and interaction. Thus, we must also estimate the errors that we may not see, and do our best to be aware of flaws in human logic and cognitive biases, which may include:

  • Loss aversion: “the dis-utility of giving up an object is greater than the utility associated with acquiring it”.
  • Positive outcome bias: a tendency in prediction to overestimate the probability of good things happening to oneself.
  • Bandwagon effect: the tendency to do (or believe) things because many other people do (or believe) the same.
  • Irrational escalation: the tendency to make irrational decisions based upon rational decisions in the past or to justify actions already taken.
  • Omission bias: the tendency to judge harmful actions as worse, or less moral, than equally harmful omissions (inactions).

The cognitive biases above are a modest selection from Wikipedia's list [7], which names over a hundred more. Many of us will be familiar with struggling against some of these known biases, and the social dynamics involved, in complex technological situations ranging from managing modern business processes to investing in the stock market. In fact, we should apply any general lessons learned from dealing with current technological complexity to AGI. For example, some of the most successful long-term investment strategies in the stock market are boring and strict, but built mostly on safety, such as Buffett's margin-of-safety concept. Even when an AGI design takes every factor gained from social and technological experience into account and strives to optimize both cognitive and IT security, its designers cannot afford to forget that perfect and complete security remains an illusion.
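For readers who want something tangible, here is a minimal Python sketch of a few of the traditional measures named earlier: input validation, safe conversion of data types, resource limits and secure error handling. The function names and limits are my own assumptions for illustration, not part of any real AGI design.

```python
# Illustrative only: input validation, safe type conversion, resource limits
# and secure error handling, as listed above. Names and limits are assumptions.

import resource  # POSIX-only module for per-process resource limits

MAX_MESSAGE_LENGTH = 4096  # arbitrary illustrative limit

def parse_user_message(raw: bytes) -> str:
    """Validate and safely convert untrusted input before passing it on."""
    if len(raw) > MAX_MESSAGE_LENGTH:
        raise ValueError("message too long")            # input validation
    try:
        return raw.decode("utf-8")                      # safe type conversion
    except UnicodeDecodeError as exc:
        raise ValueError("invalid encoding") from exc   # secure error handling

def limit_memory(max_bytes: int = 512 * 1024 * 1024) -> None:
    """Cap this process's address space so a runaway task cannot exhaust RAM."""
    resource.setrlimit(resource.RLIMIT_AS, (max_bytes, max_bytes))

if __name__ == "__main__":
    limit_memory()
    print(parse_user_message(b"hello, world"))
```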

References

[1] Chen, M., Chiu, A. & Chang, H., 2005. Mining changes in customer behavior in retail marketing. Expert Systems with Applications, 28(4), 773–781.
[2] Oliver, J., 1997. A Machine Learning Approach to Automated Negotiation and Prospects for Electronic Commerce. Available at: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.9115 [Accessed Feb 25, 2011].
[3] The Singularity Institute for Artificial intelligence: http://singinst.org/
[4] For the Lifeboat Foundation’s dedicated program, see: https://lifeboat.com/ex/ai.shield
[5] Yudkowsky, E., 2006. Artificial Intelligence as a Positive and Negative Factor in Global Risk. In: Global Catastrophic Risks, Oxford University Press, 2007.
[6] See http://en.wikipedia.org/wiki/Three_Laws_of_Robotics and http://en.wikipedia.org/wiki/Friendly_AI, Accessed Feb 25, 2011
[7] For a list of cognitive biases, see http://en.wikipedia.org/wiki/Cognitive_biases, Accessed Feb 25, 2011

It would be helpful to discuss these theoretical concepts because there could be significant practical and existential implications.

The Global Brain (GB) is an emergent world-wide entity of distributed intelligence, facilitated by communication and the meaningful interconnections between millions of humans via technology (such as the internet).

For my purposes I take it to mean the expressive integration of all (or the majority) of human brains through technology and communication, a Metasystem Transition from the human brain to a global (Earth) brain. The GB is truly global not only in geographical terms but also in function.

It has been suggested that the GB has clear analogies with the human brain. For example, the basic unit of the human brain (HB) is the neuron, whereas the basic unit of the GB is the human brain. Whilst the HB is space-restricted within our cranium, the GB is constrained within this planet. The HB contains several regions that have specific functions themselves, but are also connected to the whole (e.g. occipital cortex for vision, temporal cortex for auditory function, thalamus etc.). The GB contains several regions that have specific functions themselves, but are connected to the whole (e.g. search engines, governments, etc.).

Some specific analogies are:

1. Broca's area in the inferior frontal gyrus, associated with speech. This could be the equivalent of, say, Rupert Murdoch's communication empire.
2. The motor cortex is the equivalent of the world-wide railway system.
3. The sensory system in the brain is the equivalent of all digital sensors, CCTV network, internet uploading facilities etc.

If we accept that the GB will eventually become fully operational (and this may happen within the next 40–50 years), then there could be severe repercussions for human evolution. Apart from the fact that we may be able to change our genetic make-up using technology (through synthetic biology or nanotechnology, for example), there could be new evolutionary pressures that help extend human lifespan to an indefinite degree.

Empirically, we find that there is a basic underlying law that allows neurons the same lifespan as their human host. If natural laws are universal, then I would expect the same law to operate in similar metasystems, i.e. in my analogy with humans being the basic operating units of the GB. In that case, I ask:

If there is an axiom positing that individual units (neurons) within a brain must live as long as the brain itself, i.e. 100–120 years, then the individual units (human brains and, therefore, whole humans) within a GB must live as long as the GB itself, i.e. indefinitely.

Humans will become so embedded and integrated into the GB's virtual and real structures that it may make more sense, from a resource-allocation point of view, to maintain existing humans indefinitely rather than eliminate them through ageing and create new ones, who would then need extra resources in order to re-integrate themselves into the GB.

The net result will be that humans will start experiencing an unprecedented prolongation of their lifespan, as the GB attempts to evolve to higher levels of complexity at a low thermodynamic cost.

Marios Kyriazis
http://www.elpistheory.info

Einstein saw that clocks located “more downstairs” in an accelerating rocket predictably tick slower. This was his “happiest thought” as he often said.

However, as everything looks normal on the lower floor, the normal-appearing photons generated there actually have less mass-energy. So do all local masses there, by general covariance, and hence also all associated charges down there.

The last two implications were overlooked for a century. “This cannot be,” more than 30 renowned scientists declared, to let a prestigious experiment with which they have ties appear innocuous.

This would make for an ideal script for movie makers and a bonanza for metrologists. But why the political undertones above? Because, like the bomb, this new crumb from Einstein's table has a potentially unbounded impact. Only if it gets appreciated within a few days' time can all human beings — including the Egyptians — breathe freely again.

This appreciation is vital for the planet — before the LHC machine at CERN is re-ignited in a matter of days. No one at CERN disputes that the finding radically alters the safety equation. They only claim that the result is "absolute nonsense" and not even worth being discussed publicly.

CERN says there is "zero risk" of the planet being shrunk to 2 cm in perhaps five years' time — I say "8 percent risk" if the machine continues. This clearly deserves a mediating conference — as a judge strongly advised CERN on January 27, 2011 at a court hearing in Cologne, Germany (13 K 5693/08).

To insist on clarification about the “ultimate slow bomb at CERN” is a logical necessity. Is any couple in love or any parent NOT joining me in demanding the public safety conference before it is too late?

Otto E. Rössler, chaos researcher, University of Tubingen, Germany (For J.O.R.)

When examining the delicate balance that life on Earth hangs within, it is impossible not to consider the ongoing love/hate connection between our parent star, the sun, and our uniquely terraqueous home planet.

On one hand, Earth is situated so perfectly, so ideally, inside the sun's habitable zone that it is impossible not to regard our parent star with a sense of ongoing gratitude. It is, after all, the onslaught of spectral rain, the sun's seemingly limitless output of charged particles, which provides the initial spark to all terrestrial life.

Yet on another hand, during those brief moments of solar upheaval, when highly energetic Earth-directed ejecta threaten with destruction our precipitously perched technological infrastructure, one cannot help but eye with caution the potentially calamitous distance of only 93 million miles that our entire human population resides from this unpredictable stellar inferno.

On 6 February 2011, the twin STEREO solar-observation spacecraft aligned at opposite ends of the sun along Earth's orbit and, for the first time in human history, offered scientists a complete 360-degree view of the sun. Since solar observation began hundreds of years ago, humanity has only ever had one side of the sun in view at any given time, as it slowly completed a rotation roughly every 27 days. First launched in 2006, the two STEREO satellites are glittering jewels among a growing crown of heliophysics science missions that aim to better understand solar dynamics, and for the next eight years they will offer this dual-sided view of our parent star.

In addition to providing the source of all energy to our home planet, the sun occasionally spews violent bursts of energy from its active regions, known as coronal mass ejections (CMEs). These fast-traveling clouds of ionized gas are responsible for lovely events like the aurorae borealis and australis, but beyond a certain point they have been known to overload orbiting satellites, set fire to ground-based technological infrastructure, and even usher in widespread blackouts.

CMEs are natural occurrences and are better understood than ever thanks to the emerging view of our sun as a dynamic star. Though humanity has known for centuries that the solar cycle follows a roughly eleven-year ebb and flow, only recently has the scientific community pieced together a more complete picture of how our sun's subtle changes affect space weather and, unfortunately, how little we can feasibly do against this legitimate global threat.

The massive solar storm that occurred on 1 September 1859 produced aurorae that were visible as far south as Hawai'i and Cuba, with similar effects observed around the South Pole. The Earth-directed CME took all of 17 hours to make the 93-million-mile trek from the corona of our sun to the Earth's atmosphere, because an earlier CME had cleared a path for its journey. The one saving grace of this massive space weather event was that the North American and European telegraph system was in its delicate infancy, having been in place for only about 15 years. Nevertheless, telegraph pylons threw sparks, many of them burning, and telegraph paper worldwide caught fire spontaneously.
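As a quick back-of-the-envelope check of those figures (my own arithmetic, using only the numbers quoted above), the implied average speed of that CME was roughly 2,400 km/s:

```python
# Average speed of the 1859 CME from the figures quoted above:
# 93 million miles covered in 17 hours.

distance_miles = 93_000_000
travel_hours = 17

speed_mph = distance_miles / travel_hours
speed_km_s = speed_mph * 1.609344 / 3600  # miles per hour -> km per second

print(f"Average speed: {speed_mph:,.0f} mph ({speed_km_s:,.0f} km/s)")
# Roughly 5.5 million mph, or about 2,400 km/s.
```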

Considering the ambitious improvements in communications lines, electrical grids, and broadband networks that have been implemented since, humanity faces the threat of space weather on uneven footing. Large CME events are known to occur around every 500 years, based on ice core samples measured for high-energy proton radiation.

The CME event on 14 March 1989 overloaded the Hydro-Québec transmission lines and caused the catastrophic collapse of an entire power grid. The resulting aurorae were visible as far south as Texas and Florida. The estimated cost totaled in the hundreds of millions of dollars. A later storm in August 1989 interfered with semiconductor functionality, and trading was halted on the Toronto Stock Exchange.

Beginning in 1995 with the launch and deployment of the Solar and Heliospheric Observatory (SOHO), through 2009 with the launch of the Solar Dynamics Observatory (SDO), and finally this year with the launch of the Glory science mission, NASA is making ambitious, thoughtful strides to gain a clearer picture of the dynamics of the sun, to offer a better means of predicting space weather, and to evaluate more clearly both the great benefits and the grave stellar threats.

Earth-bound technology infrastructure remains vulnerable to high-energy output from the sun. However, the growing array of orbiting satellites that the best and the brightest among modern science use to continually gather data from our dynamic star will offer humanity its best chance of modeling, predicting, and perhaps some day defending against the occasional outburst from our parent star.

Written by Zachary Urbina, Founder Cozy Dark

This is an email to the Linux kernel mailing list, but it relates to futurism topics so I post a copy here as well.
———
Science doesn’t always proceed at the speed of thought. It often proceeds at sociological or even demographic speed. — John Tooby

Open Letter to the LKML;

If we were already talking to our computers, etc. as we should be, I wouldn't feel a need to write this to you. Given current rates of adoption, Linux still seems a generation away from being the priceless piece of free software useful to every child and PhD. The army your kernel enables has millions of people, but they often lose to smaller proprietary armies because they are working inefficiently. My mail one year ago (http://keithcu.com/wordpress/?p=272) listed the biggest work items, but I realize now I should have focused on one. In a sentence, I have discovered that we need GC (garbage-collected) lingua franca(s). (http://www.merriam-webster.com/dictionary/lingua%20franca)

Every Linux success builds momentum, but the desktop serves as a powerful daily reminder of the scientific tradition. Many software PhDs publish papers but not source, like Microsoft. I attended a human genomics conference and found that the biotech world is filled with proprietary software. IBM's Jeopardy-playing Watson is proprietary, like Deep Blue was. This topic is not discussed in any of the news articles, as if the license did not matter. I find widespread fear of having ideas stolen in the software industry, and proprietary licenses encourage this. We need to get these paranoid programmers, hunched in the shadows, scribbled secrets clutched in their fists, working together, for any of them to succeed. Desktop world domination is not necessary, but it is sufficient to get robotic chauffeurs and butlers. Windows is not the biggest problem; it is the proprietary licensing model that has infected computing, and science.

There is, unsurprisingly, a consensus among kernel programmers that usermode is “a mess” today, which suggests there is a flaw in the Linux desktop programming paradigm. Consider the vast cosmic expanse of XML libraries in a Linux distribution. Like computer vision (http://www.cs.cmu.edu/~cil/v-source.html), there are not yet clear places for knowledge to accumulate. It is a shame that the kernel is so far ahead of most of the rest of user mode.

The most popular free computer vision codebase is OpenCV, but it is time-consuming to integrate because it defines an entire world in C++ down to the matrix class. Because C/C++ didn’t define a matrix, nor provide code, countless groups have created their own. It is easier to build your own computer vision library using standard classes that do math, I/O, and graphics, than to integrate OpenCV. Getting productive in that codebase is months of work and people want to see results before then. Building it is a chore, and they have lost users because of that. Progress in the OpenCV core is very slow because the barriers to entry are high. OpenCV has some machine learning code, but they would be better delegating that out to others. They are now doing CUDA optimizations they could get from elsewhere. They also have 3 Python wrappers and several other wrappers as well; many groups spend more time working on wrappers than the underlying code. Using the wrappers is fine if you only want to call the software, but if you want to improve OpenCV then the programming environment instantly becomes radically different and more complicated.
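To illustrate the point about shared standard classes (my own sketch, using NumPy/SciPy as stand-in GC-language libraries; nothing here is taken from OpenCV itself), a basic vision operation needs no bespoke matrix class when everyone agrees on the same numeric foundation:

```python
# Sketch: a basic computer vision operation built on shared numeric classes
# (NumPy/SciPy) instead of a matrix type re-invented inside each codebase.

import numpy as np
from scipy import ndimage

# A toy 8x8 "image" with a bright square in the middle.
image = np.zeros((8, 8))
image[2:6, 2:6] = 1.0

# Edge detection via a Sobel filter: one call against the common library.
edges = ndimage.sobel(image, axis=0)
print(edges)
```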

There is a team working on Strong AI called OpenCog, a C++ codebase created in 2001. They are evolving slowly, as they do not have a constant stream of demos. They don't recognize that their codebase is a small amount of world-changing ideas buried in engineering baggage like the STL. Their GC language for small pieces is Scheme, an unpopular GC language in the FOSS community. Some in their group recommend Erlang. The OpenCog team looks at their core of C++, and over to OpenCV's core of C++, and concludes the situation is fine. One of the biggest features of ROS (the Robot OS), according to its documentation, is a re-implementation of RPC in C++, which is not what robotics was missing. I've emailed various groups, and all know of GC, but they are afraid of any decrease in performance, and they do not think they will ever save time. The transition from brooms to vacuum cleaners was disruptive, but we managed.

C/C++ makes it harder to share code amongst disparate scientists than a GC language. It doesn’t matter if there are lots of XML parsers or RSS readers, but it does matter if we don’t have an official computer vision codebase. This is not against any codebase or language, only for free software lingua franca(s) in certain places to enable faster knowledge accumulation. Even language researchers can improve and create variants of a common language, and tools can output it from other domains like math. Agreeing on a standard still gives us an uncountably infinite number of things to disagree over.

Because the kernel is written in C, you've strongly influenced the rest of the community. C is fully acceptable for a mature kernel like Linux, but many concepts aren't so clear in user mode. What is the UI of OpenOffice when speech input is the primary means of control? Many scientists don't understand the difference between the stack and the heap. Software isn't buildable if those with the necessary expertise can't use the tools they are given.

C is a flawed language for user mode because it is missing GC, which was invented a decade earlier, and C++ added as much as it took away, since each feature came with an added cost of complexity. C++ compilers converting to C was a good idea, but being a superset was not. C/C++ never died in user mode because there are now so many GC replacements that the choice paralyzes many into inaction, as there seems to be no clear place to go. Microsoft doesn't have this confusion, as their language, as of 2001, is C#. Microsoft is steadily moving to C#, but it is 10x easier to port a codebase like MySQL than SQL Server, which has an operating system inside. C# is taking over at the edges first, where innovation happens anyway. There is a competitive aspect to this.

Lots of free software technologies have multiple C/C++ implementations, because it is often easier to rewrite than to share, plus an implementation in each GC language. We might not all agree on the solution, so let's start by agreeing on the problem. A good example for GC is how a Mac port can go from weeks to hours. GC also prevents code from using memory after freeing it, freeing it twice, etc., and therefore such user code is less likely to corrupt your memory. If everyone in user mode were still writing in assembly language, you would obviously be concerned. If Git had been built in 98% Python and 2% C, it would have become easier to use faster, found ways to speed up Python, and set a good example. It doesn't matter now, but it was an opportunity in 2005.

You can “leak” memory in GC, but that just means that you are still holding a reference. GC requires the system to have a fuller understanding of the code, which enables features like reflection. It is helpful to consider that GC is a step-up for programming like C was to assembly language. In Lisp the binary was the source code — Lisp is free by default. The Baby Boomer generation didn’t bring the tradition of science to computers, and the biggest legacy of this generation is if we remember it. Boomers gave us proprietary software, C, C++, Java, and the bankrupt welfare state. Lisp and GC were created / discovered by John McCarthy, a mathematician of the WW II greatest generation. He wrote that computers of 1974 were fast enough to do Strong AI. There were plenty of people working on it back then, but not in a group big enough to achieve critical mass. If they had, we’d know their names. If our scientists had been working together in free software and Lisp in 1959, the technology we would have developed by today would seem magical to us. The good news is that we have more scientists than we need.
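A small Python illustration of that point about "leaks" in a GC language (my own example): memory is reclaimed automatically, and what looks like a leak is simply a reference that is still being held.

```python
# In a GC language you cannot free memory twice or use it after freeing,
# but you can still "leak" by holding references you no longer need.

import gc

cache = []  # a long-lived container, e.g. a misused global cache

def process(blob: bytes) -> int:
    cache.append(blob)   # this reference keeps every blob alive indefinitely
    return len(blob)

for _ in range(1000):
    process(b"x" * 10_000)

print(f"Objects still reachable from the cache: {len(cache)}")
cache.clear()            # drop the references...
gc.collect()             # ...and the collector is free to reclaim the memory
```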

There are a number of good languages, and it doesn't matter too much which one is chosen, but the Python family (Cython / PyPy) seems to require the least amount of work to get what we need, as it has the most extensive libraries: http://scipy.org/Topical_Software. I don't argue that the Python language and implementation are perfect, only good enough, just as the shapes of the letters of the English language are good enough. Choosing and agreeing on a lingua franca will increase the results for the same amount of effort. No one has to understand the big picture; they just have to do their work in a place where knowledge can easily accumulate. A GC lingua franca isn't a silver bullet, but it is the bottom piece of a solid science foundation and a powerful form of social engineering.

The most important thing is to get lingua franca(s) in key fields like computer vision and Strong AI. However, we should also consider a lingua franca for the Linux desktop. This will help, but not solve, the situation of the mass of Linux apps feeling dis-integrated. The Linux desktop is a lot harder because code here is 100x bigger than computer vision, and there is a lot of C/C++ in FOSS user mode today. In fact it seems hopeless to me, and I’m an optimist. It doesn’t matter; every team can move at a different pace. Many groups might not be able to finish a port for 5 years, but agreeing on a goal is more than half of the battle. The little groups can adopt it most quickly.

There are a lot of lurkers around codebases who want to contribute but don’t want to spend months getting up to speed on countless tedious things like learning a new error handling scheme. They would be happy to jump into a port as a way to get into a codebase. Unfortunately, many groups don’t encourage these efforts as they feel so busy. Many think today’s hardware is too slow, and that running any slower would doom the effort; they are impervious to the doublings and forget that algorithm performance matters most. A GC system may add a one-time cost of 5–20%, but it has the potential to be faster, and it gives people more time to work on performance. There are also real-time, incremental, and NUMA-aware collectors. The ultimate in performance is taking advantage of parallelism in specialized hardware like GPUs, and a GC language can handle that because it supports arbitrary bitfields.

Science moves at demographic speed when knowledge is not being reused among the existing scientists. A lingua franca makes more sense the more people adopt it. That is why I send this message to the main address of the free software mothership. The kernel provides code and leadership; you have influence and the responsibility to lead the rest, who are like wandering ants. If I were Linus, I would threaten to quit Linux and get people going on AI ;-) There are many things you could do. I mostly want to bring this to your attention. Thank you for reading this.

I am posting a copy of this open letter on my blog as well (http://keithcu.com/wordpress/?p=1691). Reading the LKML for more than one week could be classified as torture under the Geneva conventions.

I believe that death due to ageing is not an absolute necessity of human nature. From the evolutionary point of view, we age because nature withholds energy for somatic (bodily) repairs and diverts it to the germ-cells (in order to assure the survival and evolution of the DNA). This is necessary so that the DNA is able to develop and achieve higher complexity.

Although this was a valid scenario until recently, we have now evolved to such a degree that we can use our intellect to achieve further cognitive complexity by manipulating our environment. This makes it unnecessary for the DNA to evolve along the path of natural selection (which is a slow and cumbersome, ‘hit-and-miss’ process), and allows us to develop quickly and more efficiently by using our brain as a means for achieving higher complexity. As a consequence, death through ageing becomes an illogical and unnecessary process. Humans must live much longer than the current lifespan of 80–120 years, in order for a more efficient global evolutionary development to take place.

It is possible to estimate how long the above process will take to mature (see the figure below). Consider that the creation of DNA happened approximately 2 billion years ago, the formation of a neuron (cell) several million years ago, that of an effective brain (Homo sapiens sapiens) 200,000 years ago, and the establishment of complex societies (Ancient Greece, Rome, China etc.) thousands of years ago. There is a logarithmic reduction of the time necessary to proceed to the next, more complex step (a reduction by a factor of about 100; see the short calculation sketched after the list below). This means that global integration (and thus indefinite lifespans) will be achieved in a matter of decades (and certainly less than a century), starting from the 1960s–1970s, when globalisation in communications, travel and science/technology started to become established. This leaves a maximum of another 50 years before full global integration becomes established.

Each step is associated with a higher level of complexity, and takes a fraction of the time to mature compared with the previous one.

1. DNA (organic life — molecules: billions of years)

2. Neuron (effective cells: millions of years)

3. Brain (complex organisms — Homo sapiens: thousands of years)

4. Society (formation of effective societies: several centuries)

5. Global Integration (formation of a ‘super-thinking entity’: several decades)

Step number 5 implies that humans who have already developed an advance state of cognitive complexity and sophistication will transcend the limits of evolution by natural selection, and therefore, by default, must not die through ageing. Their continual life is a necessary requirement of this new type of evolution.
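The factor-of-100 compression between the steps above can be made explicit with a short calculation (my own sketch, using only the approximate figures quoted in the text):

```python
# Each step is claimed to take roughly 1/100th the time of the previous one,
# starting from the ~2 billion years quoted for the emergence of DNA.

steps = ["DNA", "Neuron", "Brain", "Society", "Global Integration"]
years = 2_000_000_000

for name in steps:
    print(f"{name:>20}: ~{years:,.0f} years")
    years /= 100

# ~2 billion years, ~20 million, ~200,000, ~2,000, and finally ~20 years
# (i.e. decades), matching the sequence of steps listed above.
```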

For full details see:

https://acrobat.com/#d=MAgyT1rkdwono-lQL6thBQ

- submitted to the District Attorney of Tubingen, to the Administrative Court of Cologne, to the Federal Constitutional Court (BVerfG) of Germany, to the International Court for Crimes Against Humanity, and to the Security Council of the United Nations -

by Otto E. Rössler, Institute for Physical and Theoretical Chemistry, University of Tubingen, Auf der Morgenstelle A, 72076 Tubingen, Germany

The results of my group represent fundamental research in the fields of general relativity, quantum mechanics and chaos theory. Several independent findings obtained in these disciplines do jointly point to a danger — almost as if Nature had posed a trap for humankind if not watching out.

MAIN RESULT. It concerns BLACK HOLES and consists of 10 sub-results

Black holes are different from what was previously thought and is still presupposed by experimentalists. It is much like the case of the Eniwetok hydrogen bomb test, where incorrect physical calculations caused a catastrophe — fortunately a localized one at the time. Four Tubingen theorems (gothic-R theorem, TeLeMaCh theorem, miniquasar theorem, superfluidity theorem) entail 10 new consequences:

1) Black holes DO NOT EVAPORATE — hence they can only grow.

2) Artificial black holes generated at the LHC thus are undetectable at first.

3) Black holes are uncharged, so the faster majority pass right through the earth’s and the sun’s matter.

4) Only the slowest artificial ones — below 11 km/sec — will stay inside earth.

5) Inside matter, a resident black hole will not grow linearly but rather — via self-organization — form a so-called "miniquasar": an electro-gravitational engine that grows exponentially, shrinking the earth to 2 cm within a few years' time.

6) Since black holes are uncharged, charged elementary particles conversely can no longer be maximally small (“point-shaped”). Hence space is “bored open” in the small as predicted by the string and loop theories.

7) Therefore, the probability of black holes being generated by the LHC experiment is heavily boosted up to about 10 percent at the energy of 7 and (planned soon) 8 TeV.

8) This high probability was apparently not yet reached in 2010, since the originally planned cumulative luminosity was not achieved. But the higher-energetic second phase of proton collisions, scheduled to start in February 2011, is bound to reach that level.

9) Black holes produced in natural particle collisions (cosmic ray protons colliding with surface protons of celestial bodies including earth) are much too fast to get stuck inside matter and hence are innocuous.

10) The only exception is ultra-dense neutron stars. However, their super-fluid “core” is frictionless by virtue of quantum mechanics. Ultra-fast mini black holes that get stuck in the “crust” can grow there only to a limited weight before sinking into the core — where they stop growing. Hence the empirical persistence of neutron stars is NOT a safety guarantee as CERN claims.

MAIN QUESTION: Why do the CERN representatives disregard the above results? (Ten possible reasons)

1, The novelty of those results.

2, The limited dissemination of the above results. So far, only three pertinent papers have appeared in print: two in conference proceedings in July 2008 and one in an online science journal in 2010. CERN never quoted these results, sent to it first as preprints, in its "safety reports" (which have not been updated for two and a half years). The more recent relevant results are still confined to the Internet.

3, The a priori improbability that several results stemming from independent areas of science would “conspire” to form a threat rather than cancel out in this respect. There seems to be no historical precedent for this.

4, The decades-long intervals between new results in general relativity make sure that new findings meet with maximum skepticism at first.

5, One finding — the unchargedness result (Ch in TeLeMaCh) — dethrones a two-century-old physical law, that of charge conservation.

6, The fact that the large planetary community of string theorists suddenly holds an "almost too good" result in their hands paradoxically causes them to keep a low profile rather than triumph.

7, The waned spirit of progress in fundamental physics after its results too often proved to be “Greek gifts.”

8, The LHC experiment is the largest and most tightly knit collective scientific effort of history.

9, A fear to lose sponsors and political support for subsequent mega-projects if admitting a potential safety gap.

10, The world-wide adoption of high-school type undergraduate curricula in place of the previous self-responsible style of studying, which has the side effect that collective authority acquires an undue weight.

SOCIETY’S FAILURE

Why has the “scientific safety conference,” publicly demanded on April 18, 2008, not been taken up by any grouping on the planet? Nothing but FALSIFICATION of the presented scientific results was and is being asked. Falsification of a single one will wipe out the danger. A week of discussing might suffice to reach a consensus.

Neither politics nor the media have realized up until now that not a single visible scientist on the planet assumes responsibility for the alleged falsity of the results presented. In particular, no individual stands up to defend his disproved counterclaims (the number of specialists who entered the ring in the first place can be counted on one hand). This simple fact — not a single open adversary — has escaped the attention of every media person and politician up until now.

Neither group dares confront a worldwide interest lobby, even though it is not money for once that is at stake but only borrowed authority. It is almost as if the grand old men of science of the 20th century had left no successors, nor had the gifted philosophers and writers (I exempt Paul Virilio). Bringing oneself up to date on a given topic paradoxically seems impaired in the age of the Internet.

Thus there are no culprits? None except for myself, who wrongly thought that painful words (like "risk of planetocaust") could have a wake-up effect at the last moment. The real reason for the delayed global awakening to the danger may lie with this communication error, made by someone who knows what it is to lose a child. In the second place, my personal friends Lorenz, von Weizsäcker, Wheeler and DeWitt are no longer among us.

CONCLUSIONS

I therefore appeal to the above called-upon high legal and political bodies to rapidly rule that the long overdue scientific safety conference take place before the LHC experiment is allowed to resume in mid-February 2011, or, in the case of a delay of the conference beyond that date, to prohibit resumption of the experiment before the conference has taken place.

I reckon with the fact that I will make a terrible fool of myself if at long last a scientist succeeds in falsifying a single one of the above 10 scientific findings (or 4 theorems). This is my risk and my hope at the same time. I ask the world’s forgiveness for my insisting that my possibly deficient state of knowledge be set straight before the largest experiment of history can continue.

However, the youngest ship’s boy in the crow’s nest who believes he recognizes something on the horizon has the acknowledged duty to insist on his getting a hearing. I humbly ask the high bodies mentioned not to hold this fact against me and to rule in accordance with my proposition: First clarification, then continuation. Otherwise, it would be madness even if in retrospect it proved innocuous. Would it not?

Sincerely yours,

Otto E. Rössler, Chaos Researcher
2011/01/14
(For J.O.R.)

The UK's Observer just put out a set of predictions for the next 25 years (20 predictions for the next 25 years). I will react to each of them individually. More generally, however, these are the kinds of ideas that get headlines, but they don't constitute good journalism. Scenario planning should be used in all predictive coverage. It is, to me, the most honest way to admit not knowing and to document the uncertainties of the future—the best way to examine big issues through different lenses. Some of these predictions may well come to pass, but many will not. What this article fails to do is inform the reader about the ways the predictions may vary from the best guess, what the possible alternatives may be—and where they simply don't know.

1. Geopolitics: ‘Rivals will take greater risks against the US’

This is a pretty non-predictive prediction. America's rivals are already challenging its monetary policy, human rights stances, shipping channels and trade policies. The article states that the US will remain the world's major power. It does not suggest that globalization could fracture the world so much that regional powers huddle against the US in various places, essentially creating stagnation and a new localism that forces us to reinvent all economies. It also does not foresee anyone acting on water rights, food, energy or nuclear proliferation. Any of those could set off major conflicts that completely disrupt our economic and political models, leading to major resets in assumptions about the future.

2. The UK economy: ‘The popular revolt against bankers will become impossible to resist’

British banks will not fall without taking much of the world financial systems with them. I like the idea of the reinvention of financial systems, though I think it is far too early to predict their shape. Banking is a major force that will evolve in emergent ways. For scenario planners, the uncertainty is about the fate of existing financial systems. Planners would do well to imagine multiple ways the institution of banking will reshape itself, not prematurely bet on any one outcome.

3. Global development: ‘A vaccine will rid the world of AIDS’

We can only hope so. Investment is high, but HIV/AIDS is not the major cause of death in the world. Other infectious and parasitic diseases still outstrip HIV/AIDS by a large margin, while cardiovascular diseases and cancer eclipse even those. So it is great to predict the end of one disease, but the prediction seems rather arbitrary. I think it would be more advantageous to rate various research programs against potential outcomes over the next 25 years and look at the impact of curing those diseases on different parts of the world. If we tackle, for instance, HIV/AIDS, malaria and diarrheal diseases, what would that do to increase the safety of people in Africa and Asia? What would the economic and political ramifications be? We also have to consider the cost of a cure and the cost of its distribution. Low-cost solutions that can easily be distributed will have higher impact than higher-cost solutions that limit access (as we have with current HIV/AIDS treatments). I think we will see multiple breakthroughs over the next 25 years, and we would do well to imagine the implications of sets of those, not focus on just one.

4. Energy: ‘Returning to a world that relies on muscle power is not an option’

For futurists, any suggestion that the world moves in reverse is anathema. For scenario planners, we know that great powers have devolved over the last 2,000 years, and there is no reason that some political, technological or environmental issue might not arise that would cause our global reality to reset itself in significant ways. I think it is naïve to say we won't return to muscle power. In fact, the failure to meet global demand for energy and food may actually move us toward a more local view of energy and food production, one that is less automated and scalable. One of the reasons we have predictions like this is that we haven't yet envisioned a language for sustainable economics that allows people to talk about the world outside the bounds of industrial-age, scale-level terms. It may well be our penchant for holding on to industrial-age models that drives us to the brink. Rather than continuing to figure out how to scale up the world, perhaps we should be thinking about ways to slow it down, restructure it and create models that are sustainable over long periods of time. The green movement is just political window dressing for what is really a more fundamental need to seek sustainability in all aspects of life, and that starts with how we measure success.

5. Advertising: ‘All sorts of things will just be sold in plain packages’

This is just a sort of random prediction that doesn't seem to mean anything if it happens. I'm not sure the state will control what is advertised, or that people will care how their stuff is packaged. In 4, above, I outline more important issues that would cause us to rethink our overall consumer mentality. If that happens, we may well see a world where advertising is irrelevant—completely irrelevant. Let's see how Madison Avenue plans for its demise (or its new role…) in a sustainable knowledge economy.

6. Neuroscience: ‘We’ll be able to plug information streams directly into the cortex’

This is already possible on a small scale. We have seen hardware interfaces with bugs and birds. The question is whether it will be a novelty, a major medical tool, or something commonplace and accessible—or whether it will be seen as dangerous, shunned by citizen regulators worried about giving up their humanity and banned by governments who can't imagine governing the overly connected. Just because we can doesn't mean we will or we should. I certainly think we may see a singularity in hardware over the next 25 years, where machines match human computational power, but I think software will greatly lag hardware. We may be able to connect, but we will do so only at rudimentary levels. On the other hand, a new paradigm for software could evolve that would let machines match us thought for thought. I put that in the black swan category. I am on constant watch for a software genius who will make Gates and Zuckerberg look like quaint 18th-century industrialists. The next revolution in software will come from a few potential paths; here are two: removal of the barriers to entry that the software industry has created and a return to more accessible computing for the masses (where they develop applications, not just consume content), or a breakthrough in distributed, parallel processing that evolves the ability to match human pattern-recognition capabilities, even if the approach appears alien to its inventors. We will have a true artificial intelligence only when we no longer understand the engineering behind its abilities.

7. Physics: ‘Within a decade, we’ll know what dark matter is’

Maybe, but we may also find that dark matter, like the "ether," is just a conceptual plug-in for an incomplete model of the universe. I suppose saying that it is a conceptual plug-in for an incomplete model would itself be an explanation of what it is—so this is one of those predictions that can't lose. Another perspective: dark matter matters, and we come to understand not only what it is but what it means, and it changes our fundamental view of physics in a way that helps us look at matter and energy through a new lens, one that may help fuel a revolution in energy production and consumption.

8. Food: ‘Russia will become a global food superpower’

Really? Well, this presumes some commercial normality for Russia, along with maintaining its risk-taking propensity to remove the safeties from technology. If Russia becomes politically stable and economically safe (you can go there without fear for your personal or economic life), then perhaps. I think, however, that this prediction is too finite and pointed. We could well see the US, China (or other parts of Asia) or even a terraformed Africa become the major food supplier—through biotechnology, perhaps, or new forms of distributed farming. The answer may not be hub-and-spoke, but distributed. We may find our own center locally as the costs of moving food around the world outweigh the industrial efficiency of its production. It may prove healthier and more efficient to forgo the abundant variety we have become accustomed to (in some parts of the world) and see food again as nutrition, and to share the lessons of efficient local production with an increasingly water-starved world.

9. Nanotechnology: ‘Privacy will be a quaint obsession’

I don’t get the link between nanotechnology and privacy. It is mentioned once in the narrative, but not in an explanatory way. As a purely hardware technology, it will threaten health (nano-pollutants) and improve health (cellular-level, molecular-level repairs). The bigger issue with nanotechnology is its computational model. If nanotechnology includes the procreation and evolution of artificial things, then we are faced with the difficult challenge of trying to imagine how something will evolve that we have never seen before, and that has never existed in nature. The interplay between nature and nanotechnology will be fascinating and perhaps frightening. Our privacy may be challenged by culture and by software, but I seriously doubt that nanotechnology will be the key to decrypting our banking system (though it could play a role). Nanotechnology is more likely to be a black swan full of surprises that we can’t even begin to imagine today.

10. Gaming: ‘We’ll play games to solve problems’

This one is easy. Of course. We always have and we always will. Problem solutions are games to those who find passion in different problem sets. The difference between a game and a chore is perspective, not the task itself. For a mathematician, solving a quadratic equation is a game. For a literature major, that same equation may be seen as a chore. Taken to the next level, gaming may become a new way to engage with work. We often engineer fun out of work, and that is a shame. We should engineer work experiences to include fun as part of the experience (see my new book, Management by Design), and I don't mean morale events. If you don't enjoy your "work", then you will be dissatisfied no matter how much you are paid. Thinking about work as a game, as Ender (Ender's Game, Orson Scott Card) did, changes the relationship between work and life. Ender, however, found out that when you get too far removed from reality, you may find moral compasses misaligned.

11. Web/internet: ‘Quantum computing is the future’

Quantum computing, like nanotechnology, will change fundamental rules, so it is hard to predict its outcomes. We will do better to closely monitor developments than to spend time over-speculating on outcomes that are probably unimaginable. It is better to accept that there are things in the future that are unimaginable now, and to practice dealing with the unimaginable as an idea, than to frustrate ourselves by trying to predict those outcomes. Imagine wicked-fast computers—it doesn't really matter if they are quantum or not. Imagine machines that can decrypt anything really quickly using traditional methods, and that create new encryptions they can't solve themselves.

On a more mundane note, the issues of net neutrality may play out so that those who pay more get more, though I suspect that will be uneven and change at the whim of politics. What I find curious is that this prediction says nothing about the alternative Internet (see my post Pirates Pine for Alternative Internet on Internet Evolution). I think we should also plan for very different information models and more data-centric interaction—in other words, we may find ourselves talking to data rather than servers in the future.

I’m not sure the next Internet will come from Waterloo, Ontario and its physicists, but from acts of random assertions by smart, tech-savvy idealists who want to take back our intellectual backbone from advertisers and cable companies.

One black swan this prediction fails to account for is the possibility of a loss of trust in the Internet altogether if it is hacked or otherwise challenged (by a virus, or made unstable by an attack on power grids or network routers). Cloud computing is based on trust. Microsoft and Google recently touted the uptime of their business offerings (Microsoft: BPOS Components Average 99.9-plus Percent Uptime). If some nefarious group takes that as a challenge (or sees the integrity of banking transactions as a challenge), we could see widespread distrust of the Net and the Cloud and a rapid return to closed, proprietary, non-homogeneous systems that confound hackers by their variety as much as they confound those who operate them.

12. Fashion: ‘Technology creates smarter clothes’

A model on the catwalk during the Gareth Pugh show at London Fashion Week in 2008. Photograph: Leon Neal/AFP/Getty Images

Smarter perhaps, but judging from the picture above, not necessarily fashion forward. I think we will see technology integrated with what we wear, and I think smart materials will also redefine other aspects of our lives and create a new manufacturing industry, even in places where manufacturing has been displaced. In the US, for instance, smart materials will not require retrofitting legacy manufacturing facilities, but will require the creation of entirely new facilities that can be designed for sustainability from the outset. However, smart clothes, other uses of smart materials and personal technology integration all require a continued positive connection between people and technology. That connection looks positive, but we may be blind to technology push-backs, even rebellions, fostered in current events like the jobless recovery.

13. Nature: ‘We’ll redefine the wild’

I like this one and think it is inevitable, but I also think it is a rather easy prediction to make. It is less easy to see all the ways nature could be redefined. Professor Mace predicts managed protected areas and a continued loss of biodiversity. I think we are at a transition point, and 25 years isn’t enough time to see its conclusion. The rapid influx of “invasive” species among indigenous species creates not just displacement, but offers an opportunity for the recreation of environments (read: evolution). We have to remember that historically the areas we are trying to protect were very different in the past than they are in our rather short collective memories. We are trying to protect a moment in history for human nostalgia. The changes in the environment presage other changes that may well take place after we have gone. Come to Earth 1,000 years from now and we may be hard pressed to find anything that is as we experience it today. The general landscape may appear the same at the highest level of fractal magnification, but zoom in and you will find the details have shifted as much as the forests of Europe or the nesting grounds of the dodo have changed over the last 1,000 years.

14. Architecture: What constitutes a ‘city’ will change

I like this prediction because it runs the gamut from distribution of power to returning to caves. It actually represents the idea using scenario thinking. I will keep this brief because Rowan Moore gets it when he writes: “To be optimistic, the human genius for inventing social structures will mean that new forms of settlement we can’t quite imagine will begin to emerge.”

15. Sport: ‘Broadcasts will use holograms’

I guess in a sustainable knowledge economy we will still have sport. I hope we figure out how to monitor the progress of our favorite teams without the creation and collection of non-biodegradable artifacts like Styrofoam number one hands and collectable beverage containers.

As for sport itself, it will be an early adopter of any new broadcast technology. I’m not sure holograms in their traditional sense will be one, however. I’m guessing we figure out 3-D with a lot less technology than holograms require.

I challenge Mr. Lee’s statements on the acceptance of performance-enhancing drugs: “I don’t think we’ll see acceptance as the trend has been towards zero tolerance and long may it remain so.” I think it is just as likely that we start seeing performance enhancement as OK, given the wide proliferation of AD/HD drugs being prescribed, as well as those being used off label for mental enhancement—not to mention the accepted use of drugs by the military (see Troops need to remember, New Scientist, 09 December 2010). I think we may well see an asterisk in the record books a decade or so from now that says, “at this point we realized sport was entertainment, and allowed the use of drugs, prosthetics and other enhancements that increased performance and entertainment value.”

16. Transport: ‘There will be more automated cars’

Yes, if we still have cars, they will likely be more automated. And in a decade, we will likely still see cars, but we may be at the transition point for the adoption of a sustainable knowledge economy where cars start to look arcane. We will see continued tension between the old industrial sectors typified by automobile manufacturers and oil exploration and refining companies, and the technology and healthcare firms that see value and profits in more local ways of staying connected and ways to move that don’t involve internal combustion engines (or electric ones for that matter).

17. Health: ‘We’ll feel less healthy’

Maybe, as Mulgan points out, healthcare isn’t radical, but people can be radical. These uncertainties around health could come down to personal choice. We may find millions of excuses for not taking care of ourselves and then place the burden of our unhealthy lifestyles at the feet of the public sector, or we may figure out that we are part of the sustainable equation as well. The latter would transform healthcare. Some of the arguments above, about distribution and localism, may also challenge the monolithic hospitals to become more distributed, as we are seeing with the rise of community-based clinics in the US and Europe. Management of healthcare may remain centralized, but delivery may be more decentralized. Of course, if economies continue to teeter, the state will assert itself and keep everything close and in as few buildings as possible.

As for electronic records, it will be the value to the end user that drives adoption. As soon as patients believe they need an electronic healthcare record as much as they need a little blue pill, we will see the adoption of the healthcare record. Until then, let the professionals do whatever they need to do to service me—the less I know the better. In a sustainable knowledge economy though, I will run my own analytics and use the results to inform my choices and actions. Perhaps we need healthcare analytics companies to start advertising to consumers as much as pharmaceutical companies currently do.

18. Religion: ‘Secularists will flatter to deceive’

I think religion may well see traditions fall, new forms emerge and fundamentalists dig in their heels. Religion offers social benefits that will be augmented by social media—religion acts as a pervasive and public filter for certain beliefs and cultural norms in a way that other associations do not. Over the next 25 years many of the more progressive religious movements may tap into their social side and reinvent themselves around association of people rather than affiliation with tenets of faith. If, however, any of the dire scenarios come to pass, look for state-asserted use of religion to increase, and for a rising tide of fundamentalism as people try to hold on to what they can of the old way of doing things.

19. Theatre: ‘Cuts could force a new political fringe’

Theatre has always had an edge, and any new fringe movement is likely to find its manifestation in art, be it theatre, song, poetry or painting. I would have preferred that the idea of art be taken up as a prediction rather than theatre in isolation. If we continue to automate and displace workers, we will need to reassess our general abandonment of the arts as a way of making a living, because creation will be the one thing that can’t be automated. We will need to find ways to pay people for human endeavors, everything from teaching to writing poetry. The fringe may turn out to be the way people stay engaged.

20. Storytelling: ‘Eventually there’ll be a Twitter classic’

Stories are already ubiquitous. We live in stories. Technology has changed our narrative form, not our longing for a narrative. The Twitter stream is a narrative channel. I would not, however, anticipate a “Twitter classic” because a classic suggests the idea of something lasting. For a “Twitter classic” to occur, the 140-character phrases would need to be extracted from their medium and held someplace beyond the context in which they were created, which would make Twitter just another version of the typewriter or word processor—either that, or Twitter figures out a better mode for persistent retrieval of tweets with associated metadata—in other words, you could query the story out of the Twitter-verse, which is very technically possible (and may make for some collaborative branching as well). But in the end, Twitter is just a repository for writing, just one of many, which doesn’t make this prediction all that concept shattering.

This post is long enough, so I won’t start listing all of the areas the Guardian failed to tackle, or its internal lack of categorical consistency (e.g., Theatre and storytelling are two sides of the same idea). I hope these observations help you engage more deeply with these ideas and with the future more generally, but most importantly, I hope they help you think about navigating the next 25 years, not relying on prescience from people with no more insight than you and I. The trick with the future is to be nimble, not to be right.


What do Singularitarianism and popular Western religion have in common? More than you might imagine. A thumbnail evaluation of both ends of the American technocentric intelligence spectrum reveals remarkable similarities in their respective narrative constructions and, naturally, amusing disparities. It would appear that all humans, regardless of our respective beliefs, seem to express goal-oriented hardwiring that demands a neatly constructed story to frame our experiences.

Be you a technophile, you are eagerly awaiting, with perhaps equal parts hope and fear, the moment when artificial general intelligence surpasses human intelligence. You don’t know exactly how this new, more cunning intelligence will react to humans, but you’re fairly certain that humanity might well be in a bit of trouble, or at the very least, have some unique competition.

Be you a technophobe, you shun the trappings of in-depth science and technology involvement, save for a superficial interaction with the rudimentary elements of technology which likely do not extend much further than your home computer, cell phone, automobile, and/or microwave oven. As a technophobe, you might even consider yourself religious, and if you’re a Christian, you might well be waiting for the second-coming, the rapture.

Both scenarios lead humanity to ironically similar destinations, in which humankind becomes either marginalized or largely vestigial.

It’s difficult to parse either eventuality with observant members of the other’s belief system. If you ask a group of technophiles what they think of the idea of the rapture, you will likely be laughed at or drowned in a tidal wave of atheist drool. The very thought of some magical force eviscerating an entire religious population in one eschatological fell swoop might be too much for some science and tech geeks, and medical attention, or at the very least a warehouse-quantity dose of smelling salts, might be in order.

Conversely, to the religiously observant, the notion of the singularity might exist in terms too technical to even theoretically digest, or might represent something entirely dark or sinister that seems to fulfill their own belief system’s end game, a kind of techno-holocaust that reifies their purported faith.

The objective reality of both scenarios will be very different from either envisioned teleology. Reality’s shades of gray have a way of making foolish even the wisest individual’s predictions.

In my personal life, I too believed that the publication of my latest and most ambitious work, explaining the decidedly broad-scope Parent Star Theory, would constitute an end result of significant consequence, much like the popular narrative surrounding the moment of the singularity; that some great finish line was reached. The truth, however, is that just like the singularity, my own narrativized moment was not a precisely secured end, but a distinct moment of beginning, of conception and commitment. Not an arrival but a departure; a bold embarkation without a clear end in sight.

Rather than answers, the coming singularity should provoke additional questions. How do we proceed? Where do we go from here? If the fundamental rules in the calculus of the human equation are changing, then how must we adapt? If the next stage of humanity exists on a post-scarcity planet, what then will be our larger goals, our new quest as a global human force?

Humanity must recognize that the idea of a narrative is indeed useful, so long as that narrative maintains some aspect of open-endedness. We might well need that consequential beginning-middle-end, if only to be reminded that each end most often leads to a new beginning.

Written by Zachary Urbina, Founder, Cozy Dark

Transhumanists are into improvements, and many talk about specific problems, for instance Nick Bostrom. However, Bostrom’s problem statements have been criticized for not necessarily being problems, and I think this is largely why one must consider the problem definition (see step #2 below).

Sometimes people talk about their “solutions” for problems, for instance this one in H+ Magazine. But in many cases they are actually talking about their ideas of how to solve a problem, or making science-fictional predictions. So if you surf the web, you will find a lot of good ideas about possibly important problems—but a lot of what you find will be undefined (or not very well defined) problem ideas and solutions.

These proposed solutions often do not attempt to find root causes, or they assume the wrong root cause. And finding a realistic, complete plan for solving a problem is rare.

8D (Eight Disciplines) is a process used in various industries for problem solving and process improvement. The 8D steps described below could be very useful for transhumanists, not just for talking about problems but for actually implementing solutions in real life.

Transhuman concerns are complex not just technologically, but also socioculturally. Some problems are more than just “a” problem—they are a dynamic system of problems, and the process for problem solving itself is not enough. There has to be management, goals, etc., most of which is outside the scope of this article. But one should first know how to deal with a single problem before scaling up, and 8D is a process that can be used on a huge variety of complex problems.

Here are the eight steps of 8D:

  1. Assemble the team
  2. Define the problem
  3. Contain the problem
  4. Root cause analysis
  5. Choose the permanent solution
  6. Implement the solution and verify it
  7. Prevent recurrence
  8. Congratulate the team
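The eight steps above can be treated as a simple checklist that a team works through in order. Here is a minimal sketch of that bookkeeping in Python (the class and field names are my own, not part of any standard 8D tooling):

```python
from dataclasses import dataclass, field

# The eight disciplines, in order. A team works through them top to bottom.
EIGHT_DISCIPLINES = [
    "Assemble the team",
    "Define the problem",
    "Contain the problem",
    "Root cause analysis",
    "Choose the permanent solution",
    "Implement the solution and verify it",
    "Prevent recurrence",
    "Congratulate the team",
]

@dataclass
class EightDReport:
    title: str
    completed: set = field(default_factory=set)  # indices of finished steps

    def complete(self, step_index: int) -> None:
        self.completed.add(step_index)

    def next_step(self):
        """First discipline that has not been completed yet, if any."""
        for i, name in enumerate(EIGHT_DISCIPLINES):
            if i not in self.completed:
                return name
        return None

report = EightDReport("Leaky roof")
report.complete(0)
print(report.next_step())  # -> "Define the problem"
```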

More detailed descriptions:

1. Assemble the Team

Are we prepared for this?

With an initial, rough concept of the problem, a team should be assembled to continue the 8D steps. The team will make an initial problem statement without presupposing a solution. They should attempt to define the “gap” (or error)—the big difference between the current problematic situation and the potential fixed situation. The team members should all be interested in closing this gap.

The team must have a leader; this leader makes agendas, synchronizes actions and communications, resolves conflicts, etc. In a company, the team should also have a “sponsor”, who is like a coach from upper management. The rest of the team is assembled as appropriate; this will vary depending on the problem, but some general rules for a candidate can be:

  • Has a unique point of view.
  • Logistically able to coordinate with the rest of the team.
  • Is not committed to preconceived notions of “the answer.”
  • Can actually accomplish change that they might be responsible for.

The size of an 8D team (at least in companies) is typically 5 to 7 people.

The team should be justified. This matters most within an organization that is paying for the team, however even a group of transhumanists out in the wilds of cyberspace will have to defend themselves when people ask, “Why should we care?”

2. Define the Problem

What is the problem here?

Let’s say somebody throws my robot out of an airplane, and it immediately falls to the ground and breaks into several pieces. This customer then informs me that this robot has a major problem when flying after being dropped from a plane and that I should improve the flying software to fix it.

Here is the mistake: The problem has not been properly defined. The robot is a ground robot and was not intended to fly or be dropped out of a plane. The real problem is that a customer has been misinformed as to the purpose and use of the product.

When thinking about how to improve humanity, or even how to merely improve a gadget, you should consider: Have you made an assumption about the issue that might be obscuring the true problem? Did the problem emerge from a process that was working fine before? What processes will be impacted? If this is an improvement, can it be measured, and what is the expected goal?

The team should attempt to grok the issues and their magnitude. Ideally, they will be informed with data, not just opinions.

Just as with medical diagnosis, the symptoms alone are probably not enough input. There are various ways to collect more data, and which methods you use depends on the nature of the problem. For example, one method is the 5 W’s and 2 H’s:

  • Who is affected?
  • What is happening?
  • When does it occur?
  • Where does it happen?
  • Why is it happening (initial understanding)?
  • How is it happening?
  • How many are affected?

For humanity-affecting problems, I think it’s very important to define what the context of the problem is.
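As a minimal sketch of how a team might record such a definition, here is a hypothetical problem statement structured around the 5 W’s and 2 H’s above, filled in with the misinformed-customer robot example from this step (the field names and values are illustrative, not a standard format):

```python
from dataclasses import dataclass

@dataclass
class ProblemStatement:
    """A 5W2H-style problem definition, without presupposing a solution."""
    who_is_affected: str
    what_is_happening: str
    when_does_it_occur: str
    where_does_it_happen: str
    why_initial_understanding: str
    how_is_it_happening: str
    how_many_affected: int

robot_issue = ProblemStatement(
    who_is_affected="One customer (so far)",
    what_is_happening="Ground robot used in a way it was never designed for",
    when_does_it_occur="On delivery, before any instructions are read",
    where_does_it_happen="In the field, at the customer site",
    why_initial_understanding="Customer was misinformed about the product's purpose",
    how_is_it_happening="The robot's purpose and use were miscommunicated",
    how_many_affected=1,
)
print(robot_issue.why_initial_understanding)
```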

3. Contain the Problem

Containment

Some problems are urgent, and a stopgap must be put in place while the problem is being analyzed. This is particularly relevant for problems such as product defects which affect customers.

Some brainstorming questions are:

  • Can anything be done to mitigate the negative impact (if any) that is happening?
  • Who would have to be involved with that mitigation?
  • How will the team know that the containment action worked?

Before deploying an interim expedient, the team should have asked and answered these questions (they essentially define the containment action):

  • Who will do it?
  • What is the task?
  • When will it be accomplished?

A canonical example: You have a leaky roof (the problem). The containment action is to put a pail underneath the hole to capture the leaking water. This is a temporary fix until the roof is properly repaired, and mitigates damage to the floor.

Don’t let the bucket of water example fool you—containment can be massive, e.g. corporate bailouts. Of course, the team must choose carefully: Is the cost of containment worth it?
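A containment action can be recorded just as tersely. Here is a minimal sketch reusing the leaky-roof example; the explicit cost-versus-damage check is my own addition, reflecting the “is the cost of containment worth it?” question above:

```python
from dataclasses import dataclass

@dataclass
class ContainmentAction:
    who: str               # who will do it
    what: str              # the interim task
    when: str              # when it will be accomplished
    cost: float            # estimated cost of the stopgap
    damage_avoided: float  # estimated damage if nothing is done

    def worth_it(self) -> bool:
        """Is the cost of containment justified by the damage it avoids?"""
        return self.cost < self.damage_avoided

bucket = ContainmentAction(
    who="Whoever is home",
    what="Put a pail under the hole to capture the leaking water",
    when="Immediately, before the next rain",
    cost=5.0,
    damage_avoided=500.0,
)
print(bucket.worth_it())  # True: a cheap stopgap protecting an expensive floor
```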

4. Root Cause Analysis

There can be many layers of causation

Whenever you think you have an answer to a problem, ask yourself: Have you gone deep enough? Or is there another layer below? If you implement a fix, will the problem grow back?

Generally in the real world events are causal. The point of root cause analysis is to trace the causes all the way back for your problem. If you don’t find the origin of the causes, then the problem will probably rear its ugly head again.

Root cause analysis is one of the most overlooked, yet important, steps of problem solving. Even engineers often lose their way when solving a problem and jump right into a fix that later turns out to be a red herring.

Typically, driving to root cause follows one of these two routes:

  1. Start with data; develop theories from that data.
  2. Start with a theory; search for data to support or refute it.

Either way, team members must always keep in mind that correlation is not necessarily causation.

One tool to use is the 5 Why’s, in which you move down the “ladder of abstraction” by continually asking: “why?” Start with a cause and ask why this cause is responsible for the gap (or error). Then ask again until you’ve bottomed out with something that may be a true root cause.
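The judgment behind each “why?” is human, but the bookkeeping is trivial. A minimal sketch, with a hypothetical causal chain borrowed from the leaky-roof example:

```python
def five_whys(effect: str, explain) -> list:
    """Walk down the ladder of abstraction by repeatedly asking 'why?'.

    `explain` maps an effect to its immediate cause, or returns None when
    the team believes it has bottomed out at a true root cause.
    """
    chain = [effect]
    for _ in range(5):  # five iterations is a guideline, not a law
        cause = explain(chain[-1])
        if cause is None:
            break
        chain.append(cause)
    return chain

# Hypothetical chain for the leaky-roof example.
causes = {
    "Water on the floor": "The roof leaks",
    "The roof leaks": "A shingle is cracked",
    "A shingle is cracked": "Shingles were not inspected after last winter",
}
print(five_whys("Water on the floor", causes.get))
# ['Water on the floor', 'The roof leaks', 'A shingle is cracked',
#  'Shingles were not inspected after last winter']
```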

There are many other general purpose methods and tools to assist in this stage; I will list some of them here, but please look them up for detailed explanations:

  • Brainstorming: Generate as many ideas as possible, and elaborate on the best ideas.
  • Process flow analysis: Flowchart a process; attempt to narrow down what element in the flow chart is causing the problem.
  • Ishikawa (fishbone): Use an Ishikawa (aka cause-and-effect) diagram to try narrowing down the cause(s).
  • Pareto analysis: Generate a Pareto chart, which may indicate which cause (of many) should be fixed first.
  • Data analysis: Use trend charts, scatter plots, etc. to assist in finding correlations and trends.

And that is just the beginning—a problem may need a specific new experiment or data collection method devised.
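To make one of the tools above concrete: a Pareto analysis is essentially a frequency count sorted in descending order, with a running cumulative percentage, and it suggests which cause to attack first. A minimal sketch with hypothetical defect observations:

```python
from collections import Counter

# Hypothetical log of observed causes for a recurring defect.
observations = [
    "misinformed customer", "loose connector", "misinformed customer",
    "firmware crash", "misinformed customer", "loose connector",
]

tally = Counter(observations)
total = sum(tally.values())

cumulative = 0
for cause, count in tally.most_common():
    cumulative += count
    print(f"{cause:22s} {count:3d}  {100 * cumulative / total:5.1f}% cumulative")
# The top one or two causes typically account for most of the occurrences,
# which is exactly what a Pareto chart makes visible.
```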

Ideally you would have a single root cause, but that is not always the case.

The team should also come up with various corrective actions that address the root cause, to be selected and refined in the next step.

5. Choose the Permanent Solution

The solution must be one or more corrective actions that solve the cause(s) of the problem. Corrective action selection is additionally guided by criteria such as time constraints, money constraints, efficiency, etc.

This is a great time to simulate/test the solution, if possible. There might be unaccounted for side effects either in the system you fixed or in related systems. This is especially true for some of the major issues that transhumanists wish to tackle.

You must verify that the corrective action(s) will in fact fix the root cause and not cause bad side effects.

6. Implement the Solution and Verify It

This is the stage when the team actually sets the corrective action(s) into motion. But doing it isn’t enough—the team also has to check to see if the solution is really working.

For some issues the verification is clean-cut. Other corrective actions have to be evaluated for effectiveness, for instance against a benchmark. Depending on the time scale of the corrective action, the team might need to add various monitors and/or controls to continually make sure the root cause stays squashed.
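When a benchmark exists, verification can often be reduced to comparing the same measurement before and after the corrective action. A minimal sketch (the metric, threshold, and numbers are all hypothetical):

```python
def verify_fix(before: list, after: list, target: float) -> bool:
    """Did the corrective action bring the average metric under the target?

    `before` and `after` are samples of the same measurement, e.g. defect
    rates per batch, taken before and after the fix was implemented.
    """
    baseline = sum(before) / len(before)
    current = sum(after) / len(after)
    improved = current < baseline        # better than before
    meets_target = current <= target     # and good enough in absolute terms
    return improved and meets_target

# Hypothetical defect rates per batch, before and after the fix.
print(verify_fix(before=[4.1, 3.8, 4.5], after=[1.2, 0.9, 1.1], target=1.5))  # True
```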

7. Prevent Recurrence

It’s possible that a process will revert to its old ways after the problem has been solved, resulting in the same type of problem happening again. So the team should provide the organization or environment with improvements to processes, procedures, practices, etc. so that this type of problem does not resurface.

8. Congratulate the Team

Party time! The team should share and publicize the knowledge gained from the process as it will help future efforts and teams.

Image credits:
1. Inception (2010), Warner Bros.
2. Peter Galvin
3. Tom Parnell
4. shalawesome