
Supermanagement! by Mr. Andres Agostini (Excerpt)

DEEPEST

“…What distinguishes our age from every other is not the world-flattening impact of communications, not the economic ascendance of China and India, not the degradation of our climate, and not the resurgence of ancient religious animosities. Rather, it is a frantically accelerating pace of change…”

Read the entire piece at http://lnkd.in/bYP2nDC

(Excerpt)

Beyond the managerial challenges (downside risks) presented by exponential technologies, as these are understood in the context of the Technological Singularity and its inherent futuristic forces impacting the present and the future now, there are also grave global risks that many forms of management must tackle immediately.

These grave global risks have nothing to do with advanced science or technology. Many of these hazards stem from nature, and others are man-made.

For instance, these grave global risks, which embody the Disruptional Singularity, are geological, climatological, political, geopolitical, demographic, social, economic, financial, legal, and environmental, among others. The Disruptional Singularity’s major risks threaten us gravely right now, not later.

Read the full document at http://lnkd.in/bYP2nDC

Futurewise Success Tenets

“Futurewise Success Tenets” is an excerpt from “The Future of Scientific Management, Today”. To read the entire piece, click the link at the end of the article. The excerpt follows:

(1) Picture mentally, radiantly. (2) Draw outside the canvas. (3) Color outside the vectors. (4) Sketch sinuously. (5) Far-sight beyond the mind’s intangible exoskeleton. (6) Abduct indiscernible falsifiable convictions. (7) Reverse-engineer a gene and a bacterium or, better yet, the lucrative genome. (8) Guillotine the over-weighted status quo. (9) Learn how to add up (in your own brainy mind) colors, dimensions, aromas, encryptions, enigmas, phenomena, geometrical and amorphous in-motion shapes, methods, techniques, codes, written lines, symbols, contexts, loci, venues, semantic terms, magnitudes, longitudes, processes, tweets, “…knowledge-laden…” hunches and omniscient bliss, and so forth. (10) Project your wisdom’s wealth onto communities of timeless-connected wikis. (11) Cryogenize the infamous illiterate-by-own-choice and reincarnate ASAP (multiverse teleporting out of a warped / wormed passage) Da Vinci, Bacon, Newton, Goethe, Bonaparte, Edison, Franklin, Churchill, Einstein, and Feynman. (12) Organize relationships into voluntary associations that are mutually beneficial and accountable for contributing productively to the surrounding community. (13) Practice the central rule of good strategy, which is to know and remain true to your core business and invest for leadership and R&D+Innovation. (14) Kaizen, Six Sigma, Lean, LeanSigma and “…Reliability Engineer…” (the latter as solely conceived and developed by Procter & Gamble and Los Alamos National Laboratory) it all, unthinkably and thoroughly, by recombinant, à la Einstein, Gedanke-motorized judgment (that is to say, Einsteinian Gedanke [“…thought experiments…”]). (15) Provide a road-map / blueprint for drastically compressing (‘crashing’) the time it will take you to get to the top of your tenure, regardless of your organizational level. (16) With the required knowledge and relationships embedded in organizations, create support for, and carry out, transformational initiatives. (17) Offer a tested pathway for addressing the linked challenges of personal transition and organizational transformation that confront leaders in the first few months of a new tenure. (18) Foster momentum by creating virtuous cycles that build credibility and by avoiding getting caught in vicious cycles that harm credibility. (19) Institute coalitions that translate into swifter organizational adjustments to the inevitable streams of change in personnel and environment. (20) Mobilize and align the overriding energy of many others in your organization, knowing that the “…wisdom of crowds…” is upfront and outright rubbish. (21) Step outside the boundaries of the framework’s system when seeking a problem’s solution. (22) With zillions of tiny bets, raise the ante and capture the documented learning through frenzied execution. (23) “…Moonshine…” and “…Skunk-works…” and “…Re-Imagineering…” it all, holding in your mind the motion-picture image that, regardless of the relevance of “…inputs…” and “…outputs…”, the highest relevance lies within the sophistication of the THROUGHPUT… (69) Figure out exactly which neurons to make synapses with. (70) Wire up synapses the soonest…”

Read the full material at http://lnkd.in/bYP2nDC

Regards,

Mr. Andres Agostini
www.linkedin.com/in/AndresAgostini

Leadership at the next level

By Kenneth Mikkelsen, Mannaz

Effective leaders must first learn the skill of leading themselves in order to cultivate their competencies for leading others.

Have you let your eyes wander across the management section in a bookstore or an airport newsstand recently? Chances are that your attention has been drawn to the colourful variety of easily digestible how-to-become-a-better-manager books.

In North America, books with exotic titles, such as “One Minute Manager”, “Moses CEO” and “Make It So: Management Lessons from Star Trek: The Next Generation”, bring in an astronomical USD 2.4 billion in revenue every year. Most of these “voodoo” management books emphasize that you must change yourself if you want a richer and fuller life – both socially and financially.

Make no mistake

It would be easy to write off the author of books such as “Managing Your Self”, Dr. Jagdish Parikh, as being in the same category. But make no mistake: Dr. Parikh, a professor, businessman and author, has a profound knowledge of management gathered from business environments all over the world. He even found the time to co-produce the Oscar-winning movie “Gandhi”.

“Hundreds of books and models purport to suggest the best way to become a leader. Yet many people, asked to name a leader they would consider a role model, struggle to identify even one or two individuals,” Dr. Parikh points out.

According to him, the gap between what we learn about leadership and what we actually implement exposes a fundamental flaw in most of the leadership models today. These models focus mainly on competencies required for leading an organization, but do not explain how to cultivate those core competencies. Therefore we face, in a sense, a crisis of leadership.

Conflicting values

One of Dr. Jagdish Parikh’s favorite stories is about his first day as an MBA student at Harvard Business School. Born in India, he was brought up with the belief that he had to do his utmost in whatever tasks, objectives or goals he set for himself. But as far as the results were concerned, he learned to accept them with equanimity, for they depended on a variety of external factors and variables over which no one could have full control. At Harvard it was a different story. During the welcoming address the dean made it clear that the MBA program was designed to ensure that there would always be more work to be done every day than the time and energy at one’s disposal.

“We were told not to feel satisfied or content with whatever we achieved, because in the moment we did so, our progress would stop along with our drive for achieving more,” says Dr. Parikh.

The message that came across to Dr. Parikh was that stress is beautiful. And if he were to progress in life, he would continue to remain dissatisfied. Going from A to B meant that C should be the next focal point, without spending time being happy about reaching B.

Cultivating consciousness

Having finished his MBA, Jagdish Parikh went back to Bombay and became successful as a businessman practicing the tenets from Harvard. However, he began to suffer negative physiological and psychological symptoms of stress after just a few years.

“I seriously began to wonder if there was another way to be successful while also remaining satisfied and happy at the same time. After deep reflection and a PhD, I discovered that the missing link between success and happiness was a lack of awareness of one’s inner dynamics,” says Dr. Parikh.

Therein lies the philosophy of Dr. Jagdish Parikh. He believes that one of the major challenges that face leaders today is to cultivate their own consciousness in a hectic business environment that doesn’t leave much time for reflection and self-discovery. However, competencies for leading others take time to grow and flourish.

“Unless one knows how to lead one’s self, it would be presumptuous to try to lead others effectively. And if you don’t lead your self, someone else will. The essence of leadership is to effectively manage relationships with people, events, and ideas. You can’t lead something you yourself identify with. The paradox is that detachment (not withdrawal, escape, or indifference) coupled with involvement (not addiction) – in other words, detached involvement – enables mastery. Leadership then happens to you,” Dr. Parikh underlines.

Eastern wisdom meets western science

From earlier orientations towards profit and power, and a more recent focus on people, we are now seeing business leaders who seek alignment with global and ecological concerns. According to Dr. Parikh, this means that there is a growing interest in creating an organizational culture based on support systems, networks and shared values, rather than on power, money and personal ambition – an interest in changing outlooks through deeper insights.

“The role of management is to create within the organization a climate, a culture, and a context in which corporate enrichment and individual fulfillment collaborate and resonate progressively in the development of a creative and integrative global community,” says Dr. Parikh.

According to Dr. Parikh, leaders should have a clear stand on the fundamental issues that are facing us today, i.e. balancing “how to make a living” with “how to live” – sort of building a bridge between Western management and Eastern philosophical traditions.

“As individuals we may pursue money, power and prestige – the symbols of success – in order to be happy. But despite getting more of these we do not feel proportionately happier. After all, we’re described as human beings not human havings or even human doings. Essentially we are going up the ladder but we also have to ensure that the ladder is against the right wall. This is where a combination of Western science and Eastern wisdom would ensure a more holistic approach to leadership – and life,” says Dr. Jagdish Parikh.


Originally posted via The Advanced Apes

Through my writings I have tried to communicate ideas related to how unique our intelligence is and how it is continuing to evolve. Intelligence is the most bizarre of biological adaptations. It appears to be an adaptation of infinite reach. Whereas organisms can only be so fast and efficient when it comes to running, swimming, flying, or any other evolved skill, the same finite limits do not appear to apply to intelligence.

What does this mean for our lives in the 21st century?

First, we must be prepared to accept that the 21st century will not be anything like the 20th. All too often I encounter people who extrapolate expected change for the 21st century that mirrors the pace of change humanity experienced in the 20th. This will simply not be the case. Just as cosmologists are well aware of the accelerating expansion of the universe, so evolutionary theorists are well aware of the accelerating pace of techno-cultural change. This acceleration shows no signs of slowing down, and few models that incorporate technological evolution predict that it will.

The result of this increased pace of change will likely not just be quantitative. The change will be qualitative as well. This means that communication and transportation capabilities will not just become faster. They will become meaningfully different in a way that would be difficult for contemporary humans to understand. And it is in the strange world of qualitative evolutionary change that I will focus on two major processes currently predicted to occur by most futurists.

Qualitative evolutionary change produces interesting differences in experience. Oftentimes this change is referred to as a “metasystem transition”. A metasystem transition occurs when a group of subsystems coordinates its goals and intents in order to solve more problems than the constituent systems could solve individually. There have been a few notable metasystem transitions in the history of biological evolution:

  • Transition from non-life to life
  • Transition from single-celled life to multi-celled life
  • Transition from decentralized nervous system to centralized brains
  • Transition from communication to complex language and self-awareness

All these transitions share the characteristic described above: subsystems coordinating to form a larger system that solves more problems than they could solve individually. All of these transitions also increased the rate of change in the universe (i.e., reduction of entropy production). The qualitative nature of the change is important to understand, and may best be explored through a thought experiment.

Imagine you are a single-celled organism on the early Earth. You exist within a planetary network of single-celled life of considerable variety, all adapted to different primordial chemical niches. This has been the nature of the planet for well over 2 billion years. Then, some single-cells start to accumulate in denser and denser agglomerations. One of the cells comes up to you and says:

I think we are merging together. I think the remainder of our days will be spent in some larger system that we can’t really conceive. We will each become adapted for a different specific purpose to aid the new higher collective.

Surely that cell would be seen as deranged. Yet, as the agglomerations of single cells become denser, formerly autonomous individual cells start to rely more and more on each other to exploit previously unattainable resources. As the process accelerates, this integrated network forms something novel and more complex than anything that had previously existed: the first multicellular organisms.

The difference between living as an autonomous single cell and living as one part of a multicellular organism is not just quantitative (i.e., being able to exploit more resources) but also qualitative (i.e., a shift from complete autonomy to being one small part of an integrated whole). Such a shift is difficult to conceive of before it actually becomes a new normative layer of complexity within the universe.

Another example of such a transition that may require less imagination is the transition to complex language and self-awareness. Language is certainly the most important phenomenon separating our species from the rest of the biosphere. It allows us to engage in a new evolution, technocultural evolution, which is essentially a new normative layer of complexity in the universe as well. For this transition, the qualitative leap is also important to understand. If you were an australopithecine, your mode of communication would not necessarily be that much more efficient than that of any modern-day great ape. Like all other organisms, your mind would be essentially isolated. Your deepest thoughts, feelings, and emotions could not fully be expressed and understood by other minds within your species. Furthermore, an entire range of thought would be completely unimaginable to you. Anything abstract would not be communicable. You could communicate that you were hungry; but you could not communicate what you thought of particular foods (for example). Language changed all that; it unleashed a new thought frontier. Not only was it now possible to exchange ideas at a faster rate, but the range of ideas that could be thought also increased.

And so after that digression we come to the main point: the metasystem transition of the 21st century. What will it be? There are two dominant, non-mutually exclusive, frameworks for imagining this transition: technological singularity and the global brain.

The technological singularity is essentially a point in time when the actual agent of techno-cultural change itself changes. At the moment the modern human mind is the agent of change. But artificial intelligence is likely to emerge this century, and a true artificial intelligence may be the last machine we (i.e., biological humans) ever invent.

The second framework is the global brain. The global brain is the idea that a collective planetary intelligence is emerging from the Internet, created by increasingly dense information pathways. This would essentially give the Earth an actual sensing centralized nervous system, and its evolution would mirror, in a sense, the evolution of the brain in organisms, and the development of higher-level consciousness in modern humans.

In a sense, both processes could be seen as the phenomena that will continue to enable trends identified by global brain theorist Francis Heylighen:

The flows of matter, energy, and information that circulate across the globe become ever larger, faster and broader in reach, thanks to increasingly powerful technologies for transport and communication, which open up ever-larger markets and forums for the exchange of goods and services.

Some view the technological singularity and global brain as competing futurist hypotheses. However, I see them as deeply symbiotic phenomena. If the metaphor of a global brain is apt, at the moment the internet forms a type of primitive and passive intelligence. However, as the internet starts to play an ever greater role in human life, and as all human minds gravitate towards communicating and interacting in this medium, the internet should start to become an intelligent mediator of human interaction. Heylighen explains how this should be achieved:

the intelligent web draws on the experience and knowledge of its users collectively, as externalized in the “trace” of preferences that they leave on the paths they have traveled.

This is essentially how the brain organizes itself, by recognizing the shapes, emotions, and movements of individual neurons, and then connecting them to communicate a “global picture”, or an individual consciousness.

The technological singularity naturally fits within this evolution. The biological human brain can only connect so deeply with the Internet. We must externalize our experience of the Internet in (increasingly small) devices like laptops, smart phones, etc. However, artificial intelligence and biological intelligence enhanced with nanotechnology could form a much deeper connection with the Internet. Such a development could, in theory, create an all-encompassing information processing system. Our minds (largely “artificial”) would form the neurons of the system, but a decentralized order would emerge from these dynamic interactions. This would be quite analogous to the way higher-level complexity has emerged in the past.

So what does this mean for you? Well, many futurists debate the likely timing of this transition, but predictions currently converge on a median of roughly 2040–2050. As we approach this era we should expect many fundamental aspects of our current institutions to change profoundly. Several new ethical issues will also arise, including issues of individual privacy and of government and corporate control – all issues that deserve a separate post.

Fundamentally this also means that your consciousness and your nature will change considerably throughout this century. The thought may sound bizarre and even frightening, but only if you believe that human intelligence and nature are static and unchanging. The reality is that human intelligence and nature are an ever-evolving process. The only difference in this transition is that you will actually be conscious of the evolution itself.

Consciousness has never experienced a metasystem transition (since the last metasystem transition was towards higher-level consciousness!). So in a sense, a post-human world can still include your consciousness. It will just be a new and different consciousness. I think it is best to think about it as the emergence of something new and more complex, as opposed to the death or end of something. For the first time, evolution will have woken up.

This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.

Introduction

Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial General Intelligence or Whole-Brain-Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure for an impending intelligence explosion (e.g. Kurzweil’s measure is when enough processing power to satisfy his estimate of the basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless it is available to the average person. I present a scenario below that may indicate that the imminence of a coming intelligence explosion is more impacted by basic processing speed – or instructions per second (IPS), regardless of cost or resource requirements per unit of computation – than it is by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than it would be to implement AGI alone – or rather that using WBE as a mediator of an increase in the rate of progress in AGI may yield an AGI faster or more efficiently per unit of effort or funding than working on AGI directly.

Loaded Uploads:

Petascale supercomputers in existence today exceed the processing-power requirements estimated by Kurzweil, Moravec, and Storrs-Hall [3]. If a wealthy individual were uploaded onto a petascale supercomputer today, they would have the same computational resources that the average person would, according to Kurzweil’s figures, eventually have in 2019, when computational processing power equal to the human brain (which he estimates at 20 quadrillion calculations per second) becomes available for roughly $1,000. While we may not yet have the necessary software to emulate a full human nervous system, the bottleneck for being able to do so is progress in the field of neurobiology rather than software performance in general. What is important is that the raw processing power estimated by some has already been surpassed – and the possibility of creating an upload may not have to wait for drastic increases in computational price performance.

The rate of signal transmission in electronic computers has been estimated to be roughly 1 million times as fast as the signal transmission speed between neurons, which is limited by the rate of passive chemical diffusion. Since the rate of signal transmission equates with subjective perception of time, an upload would presumably experience the passing of time roughly one million times faster than biological humans. If Yudkowsky’s observation [4] that this would be equivalent to experiencing all of history since Socrates every 18 “real-time” hours is correct, then such an emulation would experience roughly 250 subjective years for every hour, or about 4 years per minute. At that rate, a real-time day would equal some 6,000 subjective years, a week some 42,000 years, and a (30-day) month some 180,000 years.
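
To make the conversion explicit, here is a minimal sketch of the arithmetic in Python, assuming the roughly 250-subjective-years-per-real-hour ratio used above; the ratio itself is the essay's assumption, not a measured quantity.

    SUBJECTIVE_YEARS_PER_REAL_HOUR = 250  # assumed ratio taken from the text above

    def subjective_years(real_hours):
        # Subjective years experienced by the emulation over `real_hours` of real time.
        return real_hours * SUBJECTIVE_YEARS_PER_REAL_HOUR

    for label, hours in [("one minute", 1 / 60), ("one hour", 1), ("one day", 24),
                         ("one week", 24 * 7), ("one month (30 days)", 24 * 30)]:
        print(f"{label}: ~{subjective_years(hours):,.0f} subjective years")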

Moreover, these figures use the signal transmission speed of current, electronic paradigms of computation only, and thus the projected increase in signal-transmission speed brought about through the use of alternative computational paradigms, such as 3-dimensional and/or molecular circuitry or Drexler’s nanoscale rod-logic [5], can only be expected to increase such estimates of “subjective speed-up”.

The claim that the subjective perception of time and the “speed of thought” is a function of the signal-transmission speed of the medium or substrate instantiating such thought or facilitating such perception-of-time follows from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that the mind is instantiated by the physical operations of the brain. Thought and perception of time (or the rate at which anything is perceived really) are experiential modalities that constitute a portion of the brain’s cumulative functional modalities. If the functional modalities of the brain are instantiated by the physical operations of the brain, then it follows that increasing the rate at which such physical operations occur would facilitate a corresponding increase in the rate at which such functional modalities would occur, and thus the rate at which the experiential modalities that form a subset of those functional modalities would likewise occur.

Petascale supercomputers have surpassed the rough estimates made by Kurzweil (20 petaflops, or 20 quadrillion calculations per second), Moravec (100 million MIPS), and others. Most argue that we still need to wait for software improvements to catch up with hardware improvements. Others argue that even if we don’t understand how the operation of the brain’s individual components (e.g. neurons, neural clusters, etc.) converges to create the emergent phenomenon of mind – or even how such components converge so as to create the basic functional modalities of the brain that have nothing to do with subjective experience – we would still be able to create a viable upload. Nick Bostrom and Anders Sandberg, in their 2008 Whole Brain Emulation Roadmap [6], for instance, have argued that if we understand the operational dynamics of the brain’s low-level components, we can then computationally emulate such components, and the emergent functional modalities of the brain and the experiential modalities of the mind will emerge therefrom.

Mind Uploading is (Largely) Independent of Software Performance:

Why is this important? Because if we don’t have to understand how the separate functions and operations of the brain’s low-level components converge so as to instantiate the higher-level functions and faculties of brain and mind, then we don’t need to wait for software improvements (or progress in methodological implementation) to catch up with hardware improvements. Note that for the purposes of this essay “software performance” will denote the efficacy of the “methodological implementation” of an AGI or Upload (i.e. designing the mind-in-question, regardless of hardware or “technological implementation” concerns) rather than how optimally software achieves its effect(s) for a given amount of available computational resources.

This means that if the estimates for sufficient processing power to emulate the human brain noted above are correct then a wealthy individual could hypothetically have himself destructively uploaded and run on contemporary petascale computers today, provided that we can simulate the operation of the brain at a small-enough scale (which is easier than simulating components at higher scales; simulating the accurate operation of a single neuron is less complex than simulating the accurate operation of higher-level neural networks or regions). While we may not be able to do so today due to lack of sufficient understanding of the operational dynamics of the brain’s low-level components (and whether the models we currently have are sufficient is an open question), we need wait only for insights from neurobiology, and not for drastic improvements in hardware (if the above estimates for required processing-power are correct), or in software/methodological-implementation.

If emulating the low-level components of the brain (e.g. neurons) will give rise to the emergent mind instantiated thereby, then we don’t actually need to know “how to build a mind” – whereas we do in the case of an AGI (which for the purposes of this essay shall denote an AGI not based on the human or mammalian nervous system, even though an upload might qualify as an AGI according to many people’s definitions). This follows naturally from the conjunction of the premises that 1. the system we wish to emulate already exists and 2. we can create (i.e. computationally emulate) the functional modalities of the whole system by only understanding the operation of the low-level components’ functional modalities.

Thus, I argue that a wealthy upload who did this could conceivably accelerate the coming of an intelligence explosion by such a large degree that it could occur before computational price performance drops to a point where the basic processing power required for such an emulation is available for a widely-affordable price, say for $1,000 as in Kurzweil’s figures.

Such a scenario could make basic processing power, or Instructions-Per-Second, more indicative of an imminent intelligence explosion or hard take-off scenario than computational price performance.
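
As a toy illustration of this claim, the sketch below compares the real-time wait for price performance to reach a Kurzweil-style $1,000 threshold against the subjective time an upload running on expensive hardware would accumulate in the meantime. Every parameter (current hardware cost, doubling time, speed-up ratio) is an assumed, illustrative value rather than a figure from the essay or its cited sources.

    import math

    # Assumed, illustrative parameters; none of these figures come from the essay or its sources.
    COST_TODAY_USD = 100_000_000         # assumed present cost of brain-scale hardware
    TARGET_COST_USD = 1_000              # Kurzweil-style affordability threshold
    PRICE_PERF_DOUBLING_YEARS = 1.5      # assumed doubling time of computation per dollar
    SUBJ_YEARS_PER_REAL_HOUR = 250       # speed-up ratio used earlier in the essay

    # Real-time years until brain-scale processing power costs $1,000.
    halvings_needed = math.log2(COST_TODAY_USD / TARGET_COST_USD)
    real_years = halvings_needed * PRICE_PERF_DOUBLING_YEARS

    # Subjective working time available to the upload over that same wait.
    subjective_years = real_years * 365 * 24 * SUBJ_YEARS_PER_REAL_HOUR

    print(f"Real-time wait for $1,000 brain-scale hardware: ~{real_years:.0f} years")
    print(f"Upload's subjective time over that wait: ~{subjective_years:,.0f} years")

Under these made-up numbers the upload accumulates tens of millions of subjective working years before the price threshold is crossed, which is the intuition behind treating raw IPS, rather than price performance, as the more telling measure.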

If we can achieve human whole-brain emulation even one week before we can achieve AGI (an AGI whose cognitive architecture is not based on the biological human nervous system), and this upload is set to work on creating an AGI, then such an upload would have, according to the “subjective-speed-up” factors given above, roughly 42,000 subjective years in which to succeed in designing and implementing an AGI for every real-time week that normatively-biological AGI workers have to succeed.

The subjective-perception-of-time speed-up alone would be enough to greatly improve his/her ability to accelerate the coming of an intelligence explosion. Other features, like increased ease of self-modification and the ability to make as many copies of himself as he has processing power to run, only increase his potential to accelerate the coming of an intelligence explosion.

This is not to say that we can run an emulation without any software at all. Of course we need software – but we may not need drastic improvements in software, or a reinventing of the wheel in software design.

So why should we be able to simulate the human brain without understanding its operational dynamics in exhaustive detail? Are there any other processes or systems amenable to this circumstance, or is the brain unique in this regard?

There is a simple reason why this claim seems intuitively doubtful. One would expect that we must understand the underlying principles of a given technology’s operation in order to implement and maintain it. This is, after all, the case for all other technologies throughout the history of humanity. But the human brain is categorically different in this regard because it already exists.

If, for instance, we found a technology and wished to recreate it, we could do so by copying the arrangement of its components. But in order to make any changes to it, or any variations on its basic structure or principles-of-operation, we would need to know how to build it, maintain it, and predictively model it with a fair amount of accuracy. In order to make any new changes, we need to know how such changes will affect the operation of the other components – and this requires being able to predictively model the system. If we don’t understand how changes will impact the rest of the system, then we have no reliable means of implementing any changes.

Thus, if we seek only to copy the brain, and not to modify or augment it in any substantial way, then it is wholly unique in that we don’t need to reverse-engineer its higher-level operations in order to instantiate it.

This approach should be considered a category separate from reverse-engineering. It would indeed involve a form of reverse-engineering on the scale we seek to simulate (e.g. neurons or neural clusters), but it lacks many features of reverse-engineering by virtue of the fact that we don’t need to understand its operation on all scales. For instance, knowing the operational dynamics of the atoms composing a larger system (e.g. any mechanical system) wouldn’t necessarily translate into knowledge of the operational dynamics of its higher-scale components. The approach mind-uploading falls under, where reverse-engineering at a small enough scale is sufficient to recreate it, provided that we don’t seek to modify its internal operation in any significant way, I will call Blind Replication.

Blind Replication disallows any sort of significant modification, because if one doesn’t understand how processes affect other processes within the system, one has no way of knowing how modifications will change other processes and thus the emergent function(s) of the system. We wouldn’t have a way to translate functional or optimization objectives into changes made to the system that would facilitate them. There are also liability issues, in that one wouldn’t know how the system would work in different circumstances, and would have no guarantee of such systems’ safety or their vicarious consequences. So governments couldn’t be sure of the reliability of systems made via Blind Replication, and corporations would have no way of optimizing such systems so as to increase a given performance metric in an effort to increase profits; indeed, they would be unable to obtain intellectual-property rights over a technology whose inner workings or “operational dynamics” they cannot describe.

However, government and private industry wouldn’t be motivated by such factors (that is, ability to optimize certain performance measures, or to ascertain liability) in the first place, if they were to attempt something like this – since they wouldn’t be selling it. The only reason I foresee government or industry being interested in attempting this is if a foreign nation or competitor, respectively, initiated such a project, in which case they might attempt it simply to stay competitive in the case of industry and on equal militaristic defensive/offensive footing in the case of government. But the fact that optimization-of-performance-measures and clear liabilities don’t apply to Blind Replication means that a wealthy individual would be more likely to attempt this, because government and industry have much more to lose in terms of liability, were someone to find out.

Could Upload+AGI be easier to implement than AGI alone?

This means that the creation of an intelligence with a subjective perception of time significantly faster than that of unmodified humans (what might be called Ultra-Fast Intelligence) may be more likely to occur via an upload rather than an AGI, because the creation of an AGI is largely determined by increases in both computational processing and software performance/capability, whereas the creation of an upload may be determined by and large by processing power, and thus remains largely independent of the need for significant improvements in software performance or “methodological implementation”.

If the premise that such an upload could significantly accelerate a coming intelligence explosion (whether by using his/her comparative advantages to recursively self-modify his/herself, to accelerate innovation and R&D in computational hardware and/or software, or to create a recursively-self-improving AGI) is taken as true, it follows that even the coming of an AGI-mediated intelligence explosion specifically, despite being impacted by software improvements as well as computational processing power, may be more impacted by basic processing power (e.g. IPS) than by computational price performance — and may be more determined by computational processing power than by processing power + software improvements. This is only because uploading is likely to be largely independent of increases in software (i.e. methodological as opposed to technological) performance. Moreover, development in AGI may proceed faster via the vicarious method outlined here – namely having an upload or team of uploads work on the software and/or hardware improvements that AGI relies on – than by directly working on such improvements in “real-time” physicality.

Virtual Advantage:

The increase in subjective perception of time alone (if Yudkowsky’s estimate is correct, a ratio of 250 subjective years for every “real-time” hour) gives him/her a massive advantage. It would also likely allow the upload to counteract and negate any attempts made from “real-time” physicality to stop, slow or otherwise deter him/her.

There is another feature of virtual embodiment that could increase the upload’s ability to accelerate such developments. Neural modification is much easier from within virtual embodiment than it would be in physicality: with it, he could optimize his current functional modalities (e.g. what we coarsely call “intelligence”) or increase the metrics underlying them, thus amplifying his existing skills and cognitive faculties (as in Intelligence Amplification or IA), as well as creating categorically new functional modalities. In virtual embodiment, all such modifications become a methodological, rather than technological, problem. To enact such changes in a physically-embodied nervous system would require designing a system to implement those changes, and actually implementing them according to plan. To enact such changes in a virtually-embodied nervous system requires only a re-organization or re-writing of information. Moreover, in virtual embodiment any changes could be made and reversed, whereas in physical embodiment reversing such changes would require, again, designing a method and system for implementing such “reversal-changes” in physicality (thereby necessitating a whole host of other technologies and methodologies) – and if those changes made further unexpected changes that we can’t easily reverse, then we may create an infinite regress of changes, wherein changes made to reverse a given modification in turn create more changes that in turn need to be reversed, ad infinitum.

Thus self-modification (and especially recursive self-modification) towards the purpose of intelligence amplification into Ultraintelligence [7] is easier (i.e. it necessitates a smaller technological and methodological infrastructure – that is, a smaller required host of methods and technologies – and thus less cost as well) in virtual embodiment than in physical embodiment.

These recursive modifications not only further maximize the upload’s ability to think of ways to accelerate the coming of an intelligence explosion, but also maximize his ability to further self-modify towards that very objective (thus creating the positive feedback loop critical for I.J. Good’s intelligence explosion hypothesis) – or in other words they maximize his ability to maximize his general ability in anything.

But to what extent is the ability to self-modify hampered by the critical feature of Blind Replication mentioned above – namely, the inability to modify and optimize various performance measures by virtue of the fact that we can’t predictively model the operational dynamics of the system-in-question? Well, an upload could copy himself, enact any modifications, and see the results – or indeed, make a copy to perform this change-and-check procedure. If the inability to predictively model a system made through the “Blind Replication” method does indeed problematize the upload’s ability to self-modify, it would still be much easier to work towards being able to predictively model it, via this iterative change-and-check method, due to both the subjective-perception-of-time speedup and the ability to make copies of himself.
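
To make the iterative procedure concrete, here is a schematic change-and-check loop in Python. It is purely illustrative (a generic copy-modify-test pattern over a placeholder data structure), not a claim about how an actual emulation would be represented or modified.

    import copy
    import random

    def evaluate(emulation):
        # Placeholder scoring function: a stand-in for whatever performance metric
        # the upload cares about (speed, memory use, some cognitive benchmark, etc.).
        return sum(emulation["parameters"])

    def change_and_check(emulation, candidate_modifications, trials=100):
        # Work only on copies, never on the running "original", and keep a candidate
        # modification only if the chosen metric improves.
        best = copy.deepcopy(emulation)
        best_score = evaluate(best)
        for _ in range(trials):
            trial = copy.deepcopy(best)
            random.choice(candidate_modifications)(trial)
            score = evaluate(trial)
            if score > best_score:
                best, best_score = trial, score
        return best

    # Toy usage with made-up data and made-up modifications.
    emulation = {"parameters": [1.0, 2.0, 3.0]}
    modifications = [lambda e: e["parameters"].append(random.random()),
                     lambda e: e["parameters"].__setitem__(0, e["parameters"][0] * 1.1)]
    improved = change_and_check(emulation, modifications)

The only point of the sketch is that copying before testing lets an opaque system be improved empirically, and that kind of trial-and-error work is exactly what the subjective speed-up multiplies.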

It is worth noting that it might be possible to predictively model (and thus make reliable or stable changes to) the operation of neurons, without being able to model how this scales up to the operational dynamics of the higher-level neural regions. Thus modifying, increasing or optimizing existing functional modalities (i.e. increasing synaptic density in neurons, or increasing the range of usable neurotransmitters — thus increasing the potential information density in a given signal or synaptic-transmission) may be significantly easier than creating categorically new functional modalities.

Increasing the Imminence of an Intelligence Explosion:

So what ways could the upload use his/her new advantages and abilities to actually accelerate the coming of an intelligence explosion? He could apply his abilities to self-modification, or to the creation of a Seed-AI (or more technically a recursively self-modifying AI).

He could also accelerate its imminence vicariously by working on accelerating the foundational technologies and methodologies (or in other words the technological and methodological infrastructure of an intelligence explosion) that largely determine its imminence. He could apply his new abilities and advantages to designing better computational paradigms, new methodologies within existing paradigms (e.g. non-Von-Neumann architectures still within the paradigm of electrical computation), or to differential technological development in “real-time” physicality towards such aims – e.g. finding an innovative means of allocating assets and resources (i.e. capital) to R&D for new computational paradigms, or optimizing current computational paradigms.

Thus there are numerous methods of indirectly increasing the imminence (or the likelihood of imminence within a certain time-range, which is a measure with less ambiguity) of a coming intelligence explosion – and many new ones no doubt that will be realized only once such an upload acquires such advantages and abilities.

Intimations of Implications:

So… Is this good news or bad news? Like much else in this increasingly future-dominated age, the consequences of this scenario remain morally ambiguous. It could be both bad and good news. But the answer to this question is independent of the premises – that is, two can agree on the viability of the premises and reasoning of the scenario, while drawing opposite conclusions in terms of whether it is good or bad news.

People who subscribe to the “Friendly AI” camp of AI-related existential risk will be at once hopeful and dismayed. While it might increase their ability to create their AGI (or more technically their Coherent-Extrapolated-Volition Engine [8]), thus decreasing the chances of an “unfriendly” AI being created in the interim, they will also be dismayed by the fact that it may involve (though it does not necessitate) a recursively self-modifying intelligence, in this case an upload, being created prior to the creation of their own AGI – which is the very problem they are trying to mitigate in the first place.

Those who, like me, see a distributed intelligence explosion (in which all intelligences are allowed to recursively self-modify at the same rate – thus preserving “power” equality, or at least mitigating “power” disparity [where power is defined as the capacity to effect change in the world or society] – and in which any intelligence increasing its capability at a faster rate than all others is disallowed) as a better method of mitigating the existential risk entailed by an intelligence explosion will also be dismayed. This scenario would allow one single person to essentially have the power to determine the fate of humanity – due to his massively increased “capability” or “power” – which is the very feature (capability disparity/inequality) that the “distributed intelligence explosion” camp of AI-related existential risk seeks to minimize.

On the other hand, those who see great potential in an intelligence explosion to help mitigate existing problems afflicting humanity – e.g. death, disease, societal instability, etc. – will be hopeful because the scenario could decrease the time it takes to implement an intelligence explosion.

I for one think it highly likely that the advantages proffered by accelerating the coming of an intelligence explosion fail to supersede the disadvantages incurred by the increased existential risk it would entail. That is, I think that the increase in existential risk brought about by putting so much “power” or “capability-to-effect-change” in the (hands?) of one intelligence outweighs the decrease in existential risk brought about by the accelerated creation of an Existential-Risk-Mitigating A(G)I.

Conclusion:

Thus, the scenario presented above yields some interesting and counter-intuitive conclusions:

  1. How imminent an intelligence explosion is, or how likely it is to occur within a given time-frame, may be more determined by basic processing power than by computational price performance, which is a measure of basic processing power per unit of cost. This is because as soon as we have enough processing power to emulate a human nervous system, provided we have sufficient software to emulate the lower level neural components giving rise to the higher-level human mind, then the increase in the rate of thought and subjective perception of time made available to that emulation could very well allow it to design and implement an AGI before computational price performance increases by a large enough factor to make the processing power necessary for that AGI’s implementation available for a widely-affordable cost. This conclusion is independent of any specific estimates of how long the successful computational emulation of a human nervous system will take to achieve. It relies solely on the premise that the successful computational emulation of the human mind can be achieved faster than the successful implementation of an AGI whose design is not based upon the cognitive architecture of the human nervous system. I have outlined various reasons why we might expect this to be the case. This would be true even if uploading could only be achieved faster than AGI (given an equal amount of funding or “effort”) by a seemingly-negligible amount of time, like one week, due to the massive increase in speed of thought and the rate of subjective perception of time that would then be available to such an upload.
  2. The creation of an upload may be relatively independent of software performance/capability (which is not to say that we don’t need any software, because we do, but rather that we don’t need significant increases in software performance or improvements in methodological implementation – i.e. how we actually design a mind, rather than the substrate it is instantiated by – which we do need in order to implement an AGI and which we would need for WBE, were the system we seek to emulate not already in existence) and may in fact be largely determined by processing power or computational performance/capability alone, whereas AGI is dependent on increases in both computational performance and software performance or fundamental progress in methodological implementation.
    • If this second conclusion is true, it means that an upload may be possible quite soon considering the fact that we’ve passed the basic estimates for processing requirements given by Kurzweil, Moravec and Storrs-Hall, provided we can emulate the low-level neural regions of the brain with high predictive accuracy (and provided the claim that instantiating such low-level components will vicariously instantiate the emergent human mind, without needing to really understand how such components functionally converge to do so, proves true), whereas AGI may still have to wait for fundamental improvements to methodological implementation or “software performance”.
    • Thus it may be easier to create an AGI by first creating an upload to accelerate the development of that AGI’s creation, than it would be to work on the development of an AGI directly. Upload+AGI may actually be easier to implement than AGI alone is!


References:

[1] Kurzweil, R. (2005). The Singularity Is Near. Penguin Books.

[2] Moravec, H. (1997). When will computer hardware match the human brain? Journal of Evolution and Technology, 1(1). Available at: http://www.jetpress.org/volume1/moravec.htm [Accessed 01 March 2013].

[3] Hall, J. (2006). “Runaway Artificial Intelligence?” Available at: http://www.kurzweilai.net/runaway-artificial-intelligence [Accessed 01 March 2013].

[4] Ford, A. (2011). Yudkowsky vs Hanson on the Intelligence Explosion — Jane Street Debate 2011. [Online video]. August 10, 2011. Available at: http://www.youtube.com/watch?v=m_R5Z4_khNw [Accessed 01 March 2013].

[5] Drexler, K. E. (1989). Molecular Manipulation and Molecular Computation. In NanoCon Northwest Regional Nanotechnology Conference, Seattle, Washington, February 14–17. Available at: http://www.halcyon.com/nanojbl/NanoConProc/nanocon2.html [Accessed 01 March 2013].

[6] Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap, Technical Report #2008–3. http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf [Accessed 01 March 2013]

[7] Good, I.J. (1965). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers.

[8] Yudkowsky, E. (2004). Coherent Extrapolated Volition. The Singularity Institute.


Asia Institute Report

Proposal for a Constitution of Information
March 3, 2013
Emanuel Pastreich

Introduction

When David Petraeus resigned as CIA director after an extramarital affair with his biographer Paula Broadwell was exposed, the problem of information security gained national attention. The public release of personal e-mails in order to impugn someone at the very heart of the American intelligence community raised awareness of e-mail privacy issues and generated a welcome debate on the need for greater safeguards. The problem of e-mail security, however, is only the tip of the iceberg of a far more serious problem involving information with which we have not started to grapple. We will face devastating existential questions in the years ahead as human civilization enters a potentially catastrophic transformation—one driven not by the foibles of man, but rather by the exponential increase in our capability to gather, store, share, alter and fabricate information of every form, coupled with a sharp drop in the cost of doing so. Such basic issues as how we determine what is true and what is real, who controls institutions and organizations, and what has significance for us in an intellectual and spiritual sense will become increasingly problematic. The emerging challenge cannot be solved simply by updating the 1986 “Electronic Communications Privacy Act” to meet the demands of the present day;[1] it will require a rethinking of our society and culture and new, unprecedented institutions to respond to the challenge. International Data Corporation estimated the total amount of digital information in the world to be 2.7 zettabytes (2.7 × 10^21 bytes) in 2012, a 48 percent increase from 2011—and we are just getting started.[2]

[1]As is suggested in the article by Tony Romm “David Petraeus affair scandal highlights email privacy issues” (http://www.politico.com/news/stories/1112/83984.html#ixzz2CUML3RDy).

[2] http://www.idc.com/getdoc.jsp?containerId=prUS23177411#.UTL3bDD-H54
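
As a rough check on the pace implied by the IDC figure above, the short calculation below (illustrative arithmetic only) converts a 48 percent annual growth rate into a doubling time and a ten-year growth factor.

    import math

    ANNUAL_GROWTH = 0.48  # 48 percent year-over-year growth, per the IDC estimate above

    doubling_time_years = math.log(2) / math.log(1 + ANNUAL_GROWTH)
    decade_factor = (1 + ANNUAL_GROWTH) ** 10

    print(f"Doubling time at 48% annual growth: ~{doubling_time_years:.1f} years")  # about 1.8 years
    print(f"Growth over one decade at that rate: ~{decade_factor:.0f}x")            # about 50x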

The explosion in the amount of information circulating in the world, and the increase in the ease with which that information can be obtained or altered, will change every aspect of our lives, from education and governance to friendship and kinship, to the very nature of human experience. We need a comprehensive response to the information revolution that not only proposes innovative ways to employ new technologies in a positive manner but also addresses the serious, unprecedented, challenges that they present for us.

The ease with which information of every form can now be reproduced and altered is an epistemological, ontological, and governmental challenge for us. Let us concentrate on the issue of governance here. The manipulability of information is increasing in all aspects of life, but the constitution on which we base our laws and our government has little to say about information, and nothing to say about the transformative wave sweeping through our society today as a result. Moreover, we have trouble grasping the seriousness of the information crisis because it alters the very lens through which we perceive the world. If we rely on the Internet to tell us how the world changes, for example, we are blind as to how the Internet itself is evolving and how that evolution impacts human relations. For that matter, because our very thought patterns are molded over time by the manner in which we receive information, we may come to see information presented in that online format as a more reliable source than our direct perceptions of the physical world. The information revolution has the potential to dramatically change human awareness of the world and inhibit our ability to make decisions if we are surrounded with convincing data whose reliability we cannot confirm. These challenges call out for a direct and systematic response.

A range of piecemeal solutions to the crisis are being undertaken around the world. The changes in our world, however, are so fundamental that they call out for a systematic response. We need to hold an international constitutional convention through which we can draft a legally binding global “constitution of information” that will address the fundamental problems created by the information revolution and set down clear guidelines for how we can control the terrible cultural and institutional fluidity it has created. The process of identifying the problems born of the massive shift in the nature of information, and of suggesting workable solutions, will be complex, but the issue calls out for an entirely new universe of administration and jurisprudence regarding the control, use, and abuse of information. As James Baldwin once wrote, “Not everything that is faced can be changed. But nothing can be changed until it is faced.”

The changes are so extensive that they cannot be dealt with through mere extensions of the United States Constitution or the existing legal code, nor can the task be left to intelligence agencies, communications companies, congressional committees or international organizations that were not designed to handle the convergence of issues related to increased computational power but that end up formulating information policy by default. We must bravely set out to build a consensus in the United States, and around the world, about the basic definition of information, how information should be controlled and maintained, and what the long-term implications of the shifting nature of information will be for humanity. We should then launch a constitutional convention and draft a document that sets forth a new set of laws and responsible agencies for assessing the accuracy of information and addressing its misuse.

Those who may object to such a constitution of information as a dangerous form of centralized authority that is likely to encourage further abuse are not fully aware of the difficulty of the problems we face. The abuse of information has already reached epic proportions, and we are just at the beginning of an exponential increase. There should be no misunderstanding: we are not suggesting a totalitarian “Ministry of Truth” that undermines a world of free exchange between individuals. Rather, we are proposing a system that will bring accountability, institutional order, and transparency to the institutions and companies that already engage in the control, collection, and alteration of information. Failure to establish a constitution of information will not assure preservation of an Arcadian utopia, but will rather encourage the emergence of even greater fields of information collection and manipulation entirely beyond the purview of any institution. The result will be increasing manipulation of human society by dark and invisible forces for which no set of regulations has been established—that is already largely the case. The constitution of information, in whatever form it may take, is the only way to start addressing the hidden forces in our society that tug at our institutional chains.

Drafting a constitution is not merely a matter of putting pen to paper. The process requires the animation of that document in the form of living institutions with budgets and mandates. It is not my intention to spell out the full parameters of such a constitution of information and the institutions that it would support because a constitution of information can only be successful if it engages living institutions and corporations in a complex and painful process of deal making and compromises that, like the American Constitutional Convention of 1787, is guided at a higher level by certain idealistic principles. The ultimate form of such a constitution cannot be predicted in advance, and to present a version in advance here would be counterproductive. We can, however, identify some of the key challenges and the issues that would be involved in drafting such a constitution of information.

The Threats posed by the Information Revolution

The ineluctable increase of computational power in recent years has simplified the transmission, modification, creation, and destruction of massive amounts of information, rendering all information fluid, mutable, and potentially unreliable. The rate at which information can be rapidly and effectively manipulated is enhanced by an exponential rise in computers’ capacity. Following Moore’s Law, which suggests that the number of transistors that can be placed on a chip will double roughly every 18 months, the capacity of computers continues to increase dramatically, whereas human institutions change only very slowly.[3] That gap between technological change and the evolution of human civilization has reached an extreme, all the more dangerous because so many people have trouble grasping the nature of the challenge and blame the abuse of information they observe on the dishonesty of individuals or groups, rather than on the technological change itself.
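
To make the scale of that gap concrete, here is a minimal, purely illustrative sketch (in Python) of what doubling every 18 months implies over a decade or two; the doubling period and the time spans are assumptions chosen for illustration, not measurements:

```python
# Illustrative only: projecting relative computational capacity under the
# "doubling every 18 months" paraphrase of Moore's Law cited above.

def capacity_multiplier(years: float, doubling_period_years: float = 1.5) -> float:
    """Relative capacity after `years`, doubling once every 18 months."""
    return 2 ** (years / doubling_period_years)

for years in (5, 10, 20):
    print(f"After {years:2d} years: ~{capacity_multiplier(years):,.0f}x the capacity")
# After  5 years: ~10x; after 10 years: ~102x; after 20 years: ~10,321x.
# Institutions, meanwhile, change on a time scale of decades -- hence the gap.
```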

The cost for surveillance of electronic communications, for keeping track of the whereabouts of people and for documenting every aspect of human and non-human interaction, is dropping so rapidly that what was the exclusive domain of supercomputers at the National Security Agency a decade ago is now entirely possible for developing countries, and will soon be in the hands of individuals. In ten years, when vastly increased computational power will mean that a modified laptop computer can track billions of people with considerable resolution, and that capability is combined with autonomous drones, we will need a new legal framework to respond in a systematic manner to the use and abuse of information at all levels of our society. If we start to plan the institutions that we will need, we can avoid the greatest threat: the invisible manipulation of information without accountability.

Surveillance and gathering of massive amounts of information

As the cost of collecting information becomes inexpensive, it is becoming easier to collect and sort massive amounts of data about individuals and groups and to extract from that information relevant detail about their lives and activities. Seemingly insignificant data taken from garbage, emails, and photographs can now be easily combined and systematically analyzed to essentially give as much information about individuals as a government might obtain from wiretapping—although emerging technology makes the process easier to implement and harder to detect. Increasingly smaller devices can take photographs of people and places over time with great ease and that data can be combined and sorted so as to obtain extremely accurate descriptions of the daily lives of individuals, who they are, and what they do. Such information can be combined with other information to provide complete profiles of people that go beyond what the individuals know about themselves. As cameras are combined with mini-drones in the years to come, the range of possible surveillance will increase dramatically. Global regulations will be an absolute must for the simple reason that it will be impossible to stop this means of gathering big data.
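
As a purely hypothetical illustration of how little machinery such aggregation requires, the sketch below (Python) joins a handful of made-up records from unrelated sources on a single shared identifier; every name and field here is invented, and real data-fusion systems are of course far more sophisticated:

```python
# Hypothetical example: merging scattered, individually "insignificant" records
# into one profile, keyed on a shared identifier (here, an email address).

from collections import defaultdict

purchase_logs = [{"email": "a@example.com", "item": "train ticket to Boston"}]
photo_metadata = [{"email": "a@example.com", "gps": (42.35, -71.06), "time": "08:15"}]
mailing_lists = [{"email": "a@example.com", "interest": "political activism"}]

profiles = defaultdict(dict)
for source in (purchase_logs, photo_metadata, mailing_lists):
    for record in source:
        profiles[record["email"]].update(record)  # merge on the shared key

print(profiles["a@example.com"])
# Three trivial sources, one surprisingly detailed profile of a person's day.
```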

Fabrication of information

In the not-too-distant future, it will be possible to fabricate cheaply not only texts and data but all forms of photographs, recordings, and videos with such a level of verisimilitude that fictional artifacts indistinguishable from their historically accurate counterparts will compete for our attention. Already, existing processing power can be combined with intermediate user-level computer skills to effectively alter information, whether still-frame images using programs like Photoshop or videos using Final Cut Pro. Digital information platforms for photographs and videos are extremely susceptible to alteration, and the problem will get far worse. It will be possible for individuals to create convincing documentation, photo or video, in which any event involving any individual is vividly portrayed in an authentic manner. It will be increasingly easy for any number of factions and interest groups to fabricate materials that document their perspectives, creating political and systemic chaos. Rules stipulating what is true, and what is not, will no longer be optional once we reach that point. Of course, authorizing an organization to make the call as to what information is true brings with it an incredible risk of abuse. Nevertheless, although there will be great risk in enabling a group to make binding determinations concerning authenticity (and there will clearly be a political element to truth as long as humans rule society), the danger posed by inaction is far worse.

When fabricated images and movies can no longer be distinguished from reality by the observer, and computers can easily continue to create new content, it will be possible to sustain these fabrications over time, thereby creating convincing alternative realities with considerable mimetic depth. At that point, the ability to create convincing images and videos will merge with next-generation virtual reality technologies to further confuse the issue of what is real. We will see the emergence of virtual worlds that appear at least as real as the one we inhabit. If some event becomes a consistent reality in those virtual worlds, it may be difficult, if not impossible, for people to comprehend that the event never actually “happened,” thereby opening the door for massive manipulation of politics and, ultimately, of history.

Once we have complex virtual realities that present a physical landscape with almost as much depth as the real world, in which the characters have elaborate histories and memories of events spanning decades and form populations of millions of anatomically distinct virtual people, each with its own individuality, the potential for confusion will be tremendous. It will no longer be clear which reality has authority, and many political and legal disputes will be irresolvable.

But that is only half of the problem. Those virtual worlds are already extending into social networks. An increasing number of accounts on Facebook are not actual people at all, but characters, avatars, created by third parties. As computers grow more powerful, it will be possible to create thousands, then hundreds of thousands, of individuals on social networks who have complex personal histories and personalities. These virtual people will be able to engage human partners in compelling conversations that pass the Turing Test. And because those virtual people can write messages and Skype 24 hours a day, and customize their message to what each individual finds interesting, they can be more attractive than human “friends” and have the potential to seriously distort our very concept of society and of reality. There will be a concrete and practical need for a set of codes and laws to regulate such an environment.

The Problem of Perception

Over time, virtual reality may end up seeming much more real and convincing to people who are accustomed to it than actual reality. That issue is particularly relevant when it comes to the next generation, who will be exposed to virtual reality from infancy. Yet virtual reality is fundamentally different from the real world. For example, virtual reality is not subject to the same laws of causality. The relations between events can be altered with ease in virtual reality and epistemological assumptions from the concrete world do not hold. Virtual reality can muddle such basic concepts as responsibility and guilt, or the relationship of self and society. It will be possible in the not-too-distant future to convince people of something using faulty or irrational logic whose only basis is in virtual reality. This fact has profound implications for every aspect of law and institutional functionality.

And if falsehoods are sustained systematically over time in virtual reality—which seems to represent reality accurately—interpretations of even common-sense assumptions about life and society will diverge, bringing everything into question. As virtual reality expands its influence, we will have to make sure that certain principles are upheld even in virtual space so as to assure that it does not create chaos in our very conception of the public sphere. That process, I hold, cannot be governed by the legal system we have at present. New institutions will have to be developed.

The dangers of the production of increasingly unverifiable information are perhaps a greater threat than even terrorism. While the idea of rogue actors setting off “dirty bombs” is certainly frightening, imagine a world in which the polity can simply never be sure whether anything it sees, reads, or hears is true. This threat is at least as significant as surveillance operations, but it has received far less attention. The time has come for us to formulate the institutional foundation that will define and maintain firm parameters for the use, alteration, and retention of information on a global scale.

Money

We live in a money economy, but the information revolution is altering the nature of money itself right before our eyes. Money has gone from an analog system, in which it was once restricted to the amount of gold an individual possessed, to a digital system in which the only limits on the amount of money represented in computers are the tolerance for risk on the part of the players involved and the ability of national and international institutions to monitor it. In any case, the mechanisms are now in place to alter the amount of currency, or, for that matter, of many other assets such as commodities or stocks, without any effective global oversight. The value of money and the quantity in circulation can be altered with increasing ease, and current safeguards are clearly insufficient. The problem will grow worse as computational power, and the number of players who can engage in complex manipulations of money, increase.

Drones and Robots

Then there is the explosion of the field of drones and robots: devices of increasingly small size that can conduct detailed surveillance and that are increasingly capable of military action and other forms of interference in human society. Whereas the United States had no armed drones and no robots when it entered Afghanistan, it now has more than 8,000 drones in the air and more than 12,000 robots on the ground.[4] The number of drones and robots will continue to increase rapidly, and they are increasingly being used in the United States and around the world without regard for borders.

As the technology becomes cheaper, we will see an increasing number of tiny drones and robots that can operate outside of any legal framework. They will be used to collect information, but they can also be hacked and serve as portals for the distortion and manipulation of information at every level. Moreover, drones and robots have the potential to carry out acts of destruction and other criminal activities whose source can be hidden because of ambiguities as to control and agency. For this reason, the rapidly emerging world of drones and robots deserves to be treated at great length within the constitution of information.

Drafting the Constitution of Information

The constitution of information could become an internationally recognized, legally binding, document that lays down rules for maintaining the accuracy of information and protecting it from abuse. It could also set down the parameters for institutions charged with maintaining long-term records of accurate information against which other data can be checked, thereby serving as the equivalent of an atomic clock for exact reference in an age of considerable confusion. The ability to certify the integrity of information is an issue an order of magnitude more serious than the intellectual property issues on which most international lawyers focus today, and deserves to be identified as an entire field in itself—with a constitution of its own that serves as the basis for all future debate and argument.
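
One conceivable technical basis for such a reference archive, offered here only as a sketch under my own assumptions rather than as the institutional design the constitution would mandate, is an append-only, hash-chained log in which every entry commits to everything recorded before it, so that later alteration is detectable:

```python
# Minimal sketch of a tamper-evident reference log (illustrative assumptions only).
import hashlib, json, time

def append_entry(chain: list, record: dict) -> dict:
    """Append a record whose hash covers the record, a timestamp, and the previous hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev": prev_hash, "ts": time.time()}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    entry = {**body, "hash": digest}
    chain.append(entry)
    return entry

def verify(chain: list) -> bool:
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev = "0" * 64
    for entry in chain:
        body = {k: entry[k] for k in ("record", "prev", "ts")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or recomputed != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

chain = []
append_entry(chain, {"statement": "Document X was published on 2013-03-01"})
append_entry(chain, {"statement": "Dataset Y had checksum abc123"})
assert verify(chain)                                   # the record is intact
chain[0]["record"]["statement"] = "Document X was never published"
assert not verify(chain)                               # the alteration is detectable
```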

This challenge of drafting a constitution of information requires a new approach and a bottom-up design if it is to address adequately the gamut of complex, interconnected issues found in transnational spaces like the one in which digital information exists. The existing governance systems for information are simply not sufficient, and overhauling them to meet the necessary standards would be far more work, and far less effective, than designing and implementing an entirely new, functional system, which the constitution of information represents. Moreover, the rate of technological change will require a system that can be updated and kept relevant while being safeguarded against capture by vested interests or drift into irrelevance.

A possible model for the constitution of information can be found in the “Freedom of Information” section of the new Icelandic constitution drafted in 2011. The Constitutional Council engaged in a broad debate with citizens and organizations throughout the country about the content of the new constitution. The resulting document described in detail the mechanisms required for government transparency and public accessibility, mechanisms far more aligned with the demands of today than those of other, similar documents.[5]

It would be meaningless, however, to merely put forth a model international “constitution of information” without the process of drafting it: without the buy-in of institutions and individuals in its formulation, the constitution would not have the authority necessary to function. The process of debating and compromising that determines the contours of that constitution would endow it with social and political significance, and, like the constitution of 1787, it would become the core for governance. For that matter, the degree to which the content of the constitution of information would be legally enforceable would itself have to be part of the discussion held at the convention.

Process for the Constitutional Convention

To respond to this global challenge, we should call a constitutional convention at which we will put forth a series of basic principles and enforceable regulations agreed upon by the major institutions responsible for policy—including national governments, supra-national organizations, multi-national corporations, research institutions, intelligence agencies, NGOs, and a variety of representatives from other organizations. Deciding whom to invite, and how, will be difficult, but it should not be a stumbling block. The United States Constitution has proven quite effective over more than two centuries even though it was drafted by a group that was not representative of the population of North America at the time. Although democratic process is essential to good government, there are moments in history in which we confront deeper ontological and epistemological questions that cannot be addressed by elections or referendums and that require a select group of individuals like Benjamin Franklin, Thomas Jefferson, and Alexander Hamilton. At the same time, the constitutional convention cannot be merely a gathering of wise men; it will have to involve those directly engaged in the information economy and information policy.

That process of drafting a constitution will involve the definition of key concepts, the establishment of the legal and social limits of the constitution’s authority, the formulation of a system for evaluating the use and misuse of information, and suggestions as to policies for responding to abuses of information on a global scale. The text of this constitution of information should be carefully drafted with a literary sense of language, so that it will outlive the specifics of the moment, and with a clear historic vision and unmistakable idealism that will inspire future generations as the United States Constitution inspires us. This constitution cannot be a flat and bureaucratic rehashing of existing policies on privacy and security.

We must be aware of the dangers involved in trying to determine what is and is not reliable information as we draft the constitution of information. It is essential to set up a workable system for assuring the integrity of information, but multiple safeguards, checks, and balances will be necessary. There should be no assumptions as to what the constitution of information would ultimately be, only the requirement that it be binding and that the process of drafting it be cautious but honest.

One essential assumption should be, following David Brin’s argument in his book The Transparent Society,[6] that privacy will be extremely difficult, if not impossible, to protect in the current environment. We must accept, paradoxically, that much information must be made “public” in some sense in order to preserve its integrity and its privacy. That is to say, the process of rigorously protecting privacy is not sufficient given the overwhelming changes that will take place in the years to come.

Brin draws heavily on Steve Mann’s concept of sousveillance, a process through which ordinary people can observe the actions of the rich and powerful so as to counter the power of the state or the corporation to observe the individual. The basic assumption behind sousveillance is that there is no means of arresting the development of technologies for surveillance, and that those with wealth and power will be able to deploy such technologies more effectively than ordinary citizens. Therefore the only possible response to increased surveillance is to create a system of mutual monitoring that assures symmetry, if not privacy. Although the constitution of information does not assume that a system allowing the ordinary citizen to monitor the actions of those in power is necessary, information systems that monitor all information in a 360-degree manner should be seriously considered as part of a constitution of information. A central motive for a constitution of information is to undo the destructive process of designating information as classified and thereby blocking off reciprocity and accountability on a massive scale. We must assure that multiple parties are involved in the process of controlling information so as to assure its accuracy and limit its abuse.

In order to achieve the goal of assuring accuracy, transparency and accountability on a global scale, but avoid massive institutional abuse of the power over information that is granted, we must create a system for monitoring information with a balance of powers at the center. Brin suggests a rather primitive system in which the ruled balance out the power of rulers through an equivalent system for observing and monitoring that works from below. I am skeptical that such a system will work unless we create large and powerful institutions within government (or the private sector) itself that have a functional need to check the power of other institutions.

Perhaps it is possible to establish a complex balance of powers wherein information is monitored and its abuses can be controlled, or punished, according to a meticulous, painfully negotiated agreement between stakeholders. It could be that information would ultimately be governed by three branches of government, something like the legislative, executive, and judicial branches that have served many constitution-based governments well. The branches assigned different tasks and authority within this system for monitoring information must have conflicts of interest and approach built into their organizations, in accord with the theory of the balance of powers, to assure that each limits the power of the others.

The need to assure accuracy may ultimately be a more essential task than the need to protect privacy. The general acceptance of inaccurate descriptions of states of affairs, or of individuals, is profoundly damaging and cannot be easily rectified. For this reason, I suggest that, as part of the three branches of government, a “three keys” system for the management of information be adopted. That is to say, sensitive information will be accessible—otherwise we cannot assure that information will be accurate—but that information can only be accessed when three keys representing the three branches of government are presented. That process would assure that accountability can be maintained, because three institutions whose interests are not necessarily aligned must be present to access that information.
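
One way such a scheme could be realized, sketched below in Python purely as an illustration under my own assumptions and not as the mechanism the constitution would actually specify, is three-of-three secret splitting: each branch holds one share of a sensitive record, any one or two shares alone are uninformative random noise, and the record can be reconstructed only when all three branches present their shares together:

```python
# Illustrative "three keys" sketch: 3-of-3 XOR secret splitting.
import os

def split_into_three(secret: bytes):
    """Split a record into three shares, one per (hypothetical) branch."""
    share_legislative = os.urandom(len(secret))
    share_executive = os.urandom(len(secret))
    # The third share is chosen so that XOR-ing all three recovers the record.
    share_judicial = bytes(s ^ a ^ b for s, a, b in
                           zip(secret, share_legislative, share_executive))
    return share_legislative, share_executive, share_judicial

def reconstruct(share_a: bytes, share_b: bytes, share_c: bytes) -> bytes:
    """Recover the record; works only if all three genuine shares are presented."""
    return bytes(a ^ b ^ c for a, b, c in zip(share_a, share_b, share_c))

record = b"sensitive but verifiable record"
leg, exe, jud = split_into_three(record)
assert reconstruct(leg, exe, jud) == record
# Any single share (or pair of shares) is uniformly random on its own, so no
# single institution can read, or silently alter, the record by itself.
```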

Systems for the gathering, analysis and control of information on a massive scale have already reached a high level of sophistication. What is sadly lacking is a larger vision of how information should be treated for the sake of our society. Most responses to the information revolution have been extremely myopic, dwelling on the abuse of information by corporations or intelligence agencies without considering the structural and technological background of those abuses. To merely attribute the misuse of information to a lack of human virtue is to miss the profound shifts sweeping through our society today.

The constitution of information will be fundamentally different from most constitutions in that it must combine rigidity, in holding all parties to the same standards, with considerable flexibility, so that it can readily adapt to new situations resulting from rapid technological change. The rate at which information can be stored and manipulated will continue to increase, and new horizons and issues will emerge, perhaps more quickly than expected. For this reason, the constitution of information cannot be overly static and must derive much of its power from its vision.

Structure of an Information Accuracy System

We can imagine a legislative body representing all the elements of the information community engaged in regulating the traffic and quality of information, as well as individuals and NGOs. It would be a mistake to assume that the organizations represented in that “legislature” would necessarily be nation states according to the United Nations formulation of global governance. The limits of the nation-state concept with regard to information policy are increasingly obvious, and this constitutional convention could serve as an opportunity to address the massive institutional changes that have taken place over the past fifty years. It would be more meaningful, in my opinion, to make the members companies, organizations, networks, local governments: the broad range of organizations that make the actual decisions concerning the creation, distribution, and reception of information. That part of the information accuracy system would be “legislative” only in a conceptual sense. It would not necessarily have meetings or be composed of elected or appointed representatives. In fact, if we consider that the actual physical meetings of government legislatures around the world have become but rituals, we can sense that the whole concept of the legislative process requires much modification.

The executive branch of the new information accuracy system would be charged with administering the policies set by the legislative branch. It would implement rules concerning information so as to preserve its integrity and prevent its misuse. The details of how information policy is carried out would be determined at the constitutional convention.

The executive would be checked not only by the legislative branch but also by a judicial branch. The judicial branch would be responsible for formulating interpretations of the constitution with regard to an ever-changing environment for information, and also for assessing the appropriateness of actions taken by the executive and legislative branches.

The terms “executive,” “legislative,” and “judicial” are meant as placeholders in this initial discussion rather than as concrete descriptions of the institutions to be established. The functioning of these units would be profoundly different from that of the corresponding branches of present local and national governments, or even of international organizations like the United Nations. If anything, because information and its changing nature underlie all other institutions, the constitution of information will be a step toward a new approach to governance in general.

Conclusion

It would be irresponsible and rash to draft an “off the shelf” constitution of information to be readily applied around the world in response to the complex situation of information today. Although I accept that initial proposals for a constitution of information like this one may be dismissed as irrelevant and wrong-headed, I assert that as we enter an unprecedented age of information revolution, in which most of the assumptions that undergirded our previous governance systems (based on physical geography and discrete domestic economies) will be overturned, there will be a critical demand for new systems to address this crisis. This initial foray can help formulate, in advance, the problems to be addressed and the format in which to do so.

In order to effectively govern a new space that exists outside of our current governance systems (or in the interstices between systems), we must make new rules that can effectively govern that space and work to defend transparency and accuracy in the perfect storm born of the circulation and alteration of information. If information exists in a transnational or global space and affects people at that scale, then the governing institutions responsible for its regulation need to be transnational or global in scale. If unprecedented changes are required, then so be it. If all records for hundreds of years exist online, then it will be entirely possible, as suggested in Margaret Atwood’s The Handmaid’s Tale, to alter all information in a single moment unless there is a constitution of information. The solution must involve designing the institutions that will be used to govern information, thus bringing an inspiring vision to what we are doing. We must give a philosophical foundation to the regulation of information and open up new horizons for human society while appealing to our better angels.

Oddly, many assume that the world of policy must consist of the drafting of turgid and mind-numbing documents in the specialized terminology of economists. But history also offers moments, such as the drafting of the United States Constitution, in which a small group of visionary individuals, working with government institutions, created an inspiring new vision of what is possible, recorded in terse and inspiring language. That is what we need today with regard to information. To propose such an approach is not a misguided modern version of Neo-Platonism, but a chance to seize the initiative with regard to ineluctable change and put forth a vision, rather than merely responding to change.

[1] As suggested in Tony Romm, “David Petraeus affair scandal highlights email privacy issues” (http://www.politico.com/news/stories/1112/83984.html#ixzz2CUML3RDy).
[2] http://www.idc.com/getdoc.jsp?containerId=prUS23177411#.UTL3bDD-H54
[3] Human genetic evolution is even slower.
[4] Peter Singer, “The Robotics Revolution,” Canadian International Council, December 11, 2012.
[5] http://fairerglobalization.blogspot.kr/2011/06/iceland-writes-information-age.html
[6] David Brin, The Transparent Society: Will Technology Force Us to Choose between Privacy and Freedom? New York: Basic Books, 1998.

YANKEE.BRAIN.MAP
The Brain Games Begin
Europe’s billion-Euro science-neuro Human Brain Project, mentioned here amongst machine morality last week, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget, and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Neither the Euro nor the American flavor is a Manhattan Project-scale undertaking in the sense of urgency and motivational factors; they’re more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion & $1–3 billion US bucks, respectively), they’re quite ambitious and potentially far more world changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Practically, these projects are expected to expand our understanding of the actual physical loci of human behavioral patterns, get to the bottom of various brain pathologies, stimulate the creation of more advanced AI/non-biological intelligence — and, of course, the big enchilada: help us understand more about our own species’ consciousness.

On Consciousness: My Simulated Brain has an Attitude?
Yes, of course it’s wild speculation to guess at the feelings and worries and conundrums of a simulated brain — but dude, what if, what if one or both of these brain simulation map thingys is done well enough that it shows signs of spontaneous, autonomous reaction? What if it tries to like, you know, do something awesome like self-reorganize, or evolve or something?

Maybe it’s too early to talk personality, but you kinda have to wonder… would the Euro-Brain be smug, never stop claiming superior education yet voraciously consume American culture, and perhaps cultivate a mild racism? Would the ‘Merica-Brain have a nation-scale authority complex, unjustifiable confidence & optimism, still believe in childish romantic love, and overuse the words “dude” and “awesome?”

We shall see. We shall see.

Oh yeah, have to ask:
Anyone going to follow Ray Kurzweil’s recipe?

Project info:
[HUMAN BRAIN PROJECT - - MAIN SITE]
[THE BRAIN ACTIVITY MAP - $ - HUFF-PO]

Kinda Pretty Much Related:
[BLUE BRAIN PROJECT]

This piece originally appeared at Anthrobotic.com on February 28, 2013.

Solving complex problems is one of the defining features of our age. The ability to harness a wide range of skills and synthesise diverse areas of knowledge is integral to a researcher’s DNA. It is interesting to read how MIT first offered a class in ‘Solving Complex Problems’ back in 2000. Over the course of a semester, students attempt to ‘imagineer’ a solution to a highly complex problem. There is a great need for this type of learning in our educational systems. If we are to develop people who can tackle the Grand Challenges of this epoch, then we need to create an environment in which our brains are allowed to be wired differently through exposure to diverse areas of knowledge and methods of understanding reality across disciplines.

When I look at my niece, who is only 4 years old, I wonder how I can give her the best education and prepare her to meet the challenges she will face as she grows up in a world which fills my heart with great anxiety. It is fascinating to read about different educational approaches, from Steiner education to Montessori education to curricula and school designs built upon cognitive neuroscience and educational theory. However, when I contrast the insights of such thinkers with educational policy in the developed world, there is quite clearly a huge disconnect between politics and science.

We need to develop a culture of complexity if we are to develop the ability and insight to solve complex problems. Looking at the world from the perspective of complexity builds a very different mindset in how we think about the world, how we go about trying to understand it, and ultimately how we go about solving problems.


…here’s Tom with the Weather.
That right there is comedian/philosopher Bill Hicks, sadly no longer with us. One imagines he would be pleased and completely unsurprised to learn that serious scientific minds are considering and actually finding support for the theory that our reality could be a kind of simulation. That means, for example, a string of daisy-chained IBM Super-Deep-Blue Gene Quantum Watson computers from 2042 could be running a History of the Universe program, and depending on your solipsistic preferences, either you are or we are the character(s).

It’s been in the news a lot of late, but — no way, right?

Because dude, I’m totally real
Despite being utterly unable to even begin thinking about how to consider what real even means, the everyday average rational person would probably assign this to the sovereign realm of unemployable philosophy majors or under the Whatever, Who Cares? or Oh, That’s Interesting I Gotta Go Now! categories. Okay fine, but on the other side of the intellectual coin, vis-à-vis recent technological advancement, of late it’s actually being seriously considered by serious people using big words they’ve learned at endless college whilst collecting letters after their names and doin’ research and writin’ and gettin’ association memberships and such.

So… why now?

Well, basically, it’s getting hard to ignore.
It’s not a new topic, it’s been hammered by philosophy and religion since like, thought happened. But now it’s getting some actual real science to stir things up. And it’s complicated, occasionally obtuse stuff — theories are spread out across various disciplines, and no one’s really keeping a decent flowchart.

So, what follows is an effort to encapsulate these ideas, and that’s daunting — it’s incredibly difficult to focus on writing when you’re wondering if you really have fingers or eyes. Along with links to some articles with links to some papers, what follows is Anthrobotic’s CliffsNotes on the intersection of physics, computer science, probability, and evidence for/against reality being real (and how that all brings us back to well, God).
You know, light fare.

First — Maybe we know how the universe works: Fantastically simplified, as our understanding deepens, it appears more and more the case that, in a manner of speaking, the universe sort of “computes” itself based on the principles of quantum mechanics. Right now, humanity’s fastest and sexiest supercomputers can simulate only extremely tiny fractions of the natural universe as we understand it (contrasted to the macro-scale inferential Bolshoi Simulation). But of course we all know the brute power of our computational technology is increasing dramatically like every few seconds, and even awesomer, we are learning how to build quantum computers, machines that calculate based on the underlying principles of existence in our universe — this could thrust the game into superdrive. So, given ever-accelerating computing power, and given that we can already simulate tiny fractions of the universe, you logically have to consider the possibility: If the universe works in a way we can exactly simulate, and we give it a shot, then relatively speaking what we make ceases to be a simulation, i.e., we’ve effectively created a new reality, a new universe (ummm… God?). So, the question is how do we know that we haven’t already done that? Or, otherwise stated: what if our eventual ability to create perfect reality simulations with computers is itself a simulation being created by a computer? Well, we can’t answer this — we can’t know. Unless…
[New Scientist’s Special Reality Issue]
[D-Wave’s Quantum Computer]
[Possible Large-scale Quantum Computing]

Second — Maybe we see it working: The universe seems to be metaphorically “pixelated.” This means that even though it’s a 50 billion trillion gajillion megapixel JPEG, if we juice the zooming-in and drill down farther and farther and farther, we’ll eventually see a bunch of discrete chunks of matter, or quantums, as the kids call them — these are the so-called pixels of the universe. Additionally, a team of lab coats at the University of Bonn think they might have a workable theory describing the underlying lattice, or existential re-bar in the foundation of observable reality (upon which the “pixels” would be arranged). All this implies, in a way, that the universe is both designed and finite (uh-oh, getting closer to the God issue). Even at ferociously complex levels, something finite can be measured and calculated and can, with sufficiently hardcore computers, be simulated very, very well. This guy Rich Terrile, a pretty serious NASA scientist, cites the pixelation thingy and poses a video game analogy: think of any first-person shooter — you cannot immerse your perspective into the entirety of the game, you can only interact with what is in your bubble of perception, and everywhere you go there is an underlying structure to the environment. Kinda sounds like, you know, life — right? So, what if the human brain is really just the greatest virtual reality engine ever conceived, and your character, your life, is merely a program wandering around a massively open game map, playing… well, you?
[Lattice Theory from the U of Bonn]
[NASA guy Rich Terrile at Vice]
[Kurzweil AI’s Technical Take on Terrile]

Thirdly — Turns out there’s a reasonable likelihood: While the above discussions on the physical properties of matter and our ability to one day copy & paste the universe are intriguing, it also turns out there’s a much simpler and straightforward issue to consider: there’s this annoyingly simplistic yet valid thought exercise posited by Swedish philosopher/economist/futurist Nick Bostrom, a dude way smarter than most humans. Basically he says we’ve got three options: 1. Civilizations destroy themselves before reaching a level of technological prowess necessary to simulate the universe; 2. Advanced civilizations couldn’t give two shits about simulating our primitive minds; or 3. Reality is a simulation. Sure, a decent probability, but sounds way oversimplified, right?
Well go read it. Doing so might ruin your day, JSYK.
[Summary of Bostrom’s Simulation Hypothesis]
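
For the quantitatively inclined: Bostrom’s paper frames the trilemma as a simple fraction. Paraphrasing from memory, so treat the symbols as approximate rather than authoritative, the expected fraction of observers who are simulated looks roughly like this:

```latex
% Rough paraphrase of the core fraction in Bostrom's simulation argument:
%   f_P : fraction of civilizations that survive to a simulation-capable stage
%   f_I : fraction of those that bother running ancestor simulations
%   N   : average number of simulated observers each such civilization creates
%         per real observer in its own history
\[
  f_{\mathrm{sim}} \;\approx\; \frac{f_P \, f_I \, \bar{N}}{\,f_P \, f_I \, \bar{N} + 1\,}
\]
% Option 1 says f_P ~ 0, option 2 says f_I ~ 0; if neither holds, the product
% is huge and f_sim is close to 1 -- which is option 3.
```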

Lastly — Data against is lacking: Any idea how much evidence or objective justification we have for the standard, accepted-without-question notion that reality is like, you know… real, or whatever? None. Zero. Of course the absence of evidence proves nothing, but given that we do have decent theories on how/why simulation theory is feasible, it follows that blithely accepting that reality is not a simulation is an intrinsically more radical position. Why would a thinking being think that? Just because they know it’s true? Believing 100% without question that you are a verifiably physical, corporeal, technology-wielding carbon-based organic primate is a massive leap of completely unjustified faith.
Oh, Jesus. So to speak.

If we really consider simulation theory, we must of course ask: who built the first one? And was it even an original? Is it really just turtles all the way down, Professor Hawking?

Okay, okay — that means it’s God time now
Now let’s see, what’s that other thing in human life that, based on a wild leap of faith, gets an equally monumental evidentiary pass? Well, proving or disproving the existence of god is effectively the same quandary posed by simulation theory, but with one caveat: we actually do have some decent scientific observations and theories and probabilities supporting simulation theory. That whole God phenomenon is pretty much hearsay, anecdotal at best. However, very interestingly, rather than negating it, simulation theory actually represents a kind of back-door validation of creationism. Here’s the simple logic:

If humans can simulate a universe, humans are its creator.
Accept the fact that linear time is a construct.
The process repeats infinitely.
We’ll build the next one.
The loop is closed.

God is us.

Heretical speculation on iteration
Ever wonder why older polytheistic religions involved the gods just kinda setting guidelines for behavior, and they didn’t necessarily demand the love and complete & total devotion of humans? Maybe those universes were 1st-gen or beta products. You know, like it used to take a team of geeks to run the building-sized ENIAC, the first universe simulations required a whole host of creators who could make some general rules but just couldn’t manage every single little detail.

Now, the newer religions tend to be monotheistic, and god wants you to love him and only him and no one else and dedicate your life to him. But just make sure to follow his rules, and take comfort that you’re right and everyone else is completely hosed and going to hell. The modern versions of god, both omnipotent and omniscient, seem more like super-lonely cosmically powerful cat ladies who will delete your ass if you don’t behave yourself and love them in just the right way. So, the newer universes are probably run as a background app on the iPhone 26, and managed by… individuals. Perhaps individuals of questionable character.

The home game:
Latest title for the 2042 XBOX-Watson³ Quantum PlayStation Cube:*
Crappy 1993 graphic design simulation: 100% Effective!

*Manufacturer assumes no responsibility for inherently emergent anomalies, useless
inventions by game characters, or evolutionary cul de sacs including but not limited to:
The duck-billed platypus, hippies, meat in a can, reality TV, the TSA,
mayonnaise, Sony VAIO products, natto, fundamentalist religious idiots,
people who don’t like homos, singers under 21, hangovers, coffee made
from cat shit, passionfruit iced tea, and the pacific garbage patch.

And hey, if true, it’s not exactly bad news
All these ideas are merely hypotheses, and for most humans the practical or theoretical proof or disproof would probably result in the same indifferent shrug. For those of us who like to rub a few brain cells together from time to time, attempting both to understand the fundamental nature of our reality/simulation and to guess at whether or not we too might someday be capable of simulating ourselves, well — these are some goddamn profound ideas.

So, no need for hand wringing — let’s get on with our character arc and/or real lives. While simulation theory definitely causes reflexive revulsion, “just a simulation” isn’t necessarily pejorative. Sure, if we take a look at the current state of our own computer simulations and A.I. constructs, it is rather insulting. So if we truly are living in a simulation, you gotta give it up to the creator(s), because it’s a goddamn amazing piece of technological achievement.

Addendum: if this still isn’t sinking in, the brilliant
Dinosaur Comics might do a better job explaining:

(This post originally published, I think like two days ago, at technosnark hub www.anthrobotic.com.)