
(Excerpt)

Beyond the managerial challenges (downside risks) presented by exponential technologies as understood in the Technological Singularity, whose inherent futuristic forces are already impacting the present and the future, there are also some grave global risks that many forms of management have to tackle immediately.

These grave global risks have nothing to do with advanced science or technology. Many of these hazards stem from nature, and some are man-made.

For instance, these grave global risks ─ embodying the Disruptional Singularity ─ are geological, climatological, political, geopolitical, demographic, social, economic, financial, legal and environmental, among others. The Disruptional Singularity’s major risks are gravely threatening us right now, not later.

Read the full document at http://lnkd.in/bYP2nDC

The Future of Scientific Management, Today! (Excerpt)

Transformative and Integrative Risk Management
Andres Agostini was asked this question:

Mr. David Shaw’s question, “…Andres, from your work on the future which management skills need to be developed? Classically the management role is about planning, organizing, leading and controlling. With the changes coming in the future what’s your view on how this management mix needs to change and adapt?…” The question was posed on an Internet forum by Mr. David Shaw (Peterborough, United Kingdom) on October 9, 2013.

This is an excerpt from “…The Future of Scientific Management, Today…”, which discusses state-of-the-art management theories and practices. To read the entire piece, just click the link at the end of the article.

CONCLUSION

In addition to being aware of, adaptable to, and resilient before the driving forces reshaping the present and the near-term future, THERE ARE SOME EXTRA MANAGEMENT SUGGESTIONS THAT I CONCURRENTLY PRACTICE:

1.- Given the vast number of insidious risks, futures, challenges, principles, processes, contents, practices, tools, techniques, benefits and opportunities, there needs to be a full-bodied, practical and applicable methodology (methodologies are utilized and implemented to solve complex problems and to facilitate the decision-making and anticipatory process).

The manager must always address issues with a Panoramic View and must also exercise the envisioning of both the Whole and the Granularity of Details, along with the embedded (corresponding) interrelationships and dynamics (that is, [i] interrelationships and dynamics of the subtle, [ii] interrelationships and dynamics of the overt and [iii] interrelationships and dynamics of the covert).

Both dynamic complexity and detail complexity, along with fuzzy logic, must be pervasively considered, as well.

To this end, it is wisely argued, …You can’t understand the knot without understanding the strands, but in the future, the strands need not remain tied up in the same way as they are today…”

In practice, disparate skills, talents, dexterities and expertise won’t ever suffice. A cohesive and congruent, yet proven, methodology (see the one above) must be optimally implemented.

Similarly, a Chinese proverb advises, …Don’t look at the waves but the currents underneath…”

2.- One must always be futurewise and technologically fluent. Don’t fight these extreme forces, just use them! One must use counter-intuitiveness (geometrically non-linearly so), insight, hindsight, foresight and far-sight in every day of the present and future (all of this in the most staggeringly exponential mode). To shed some light, I will share two quotes.

The Panchatantra (body of Eastern philosophical knowledge) establishes, …Knowledge is the true organ of sight, not the eyes.…” And Antonio Machado argues, … An eye is not an eye because you see it; an eye is an eye because it sees you …”

Managers always need a clear, knowledgeable vision. Did you already connect the dots stemming from the Panchatantra and Machado? Did you already integrate those dots into your big-picture vista?

As a side note, British Prime Minister W. E. Gladstone observed, …You cannot fight against the future…”

THE METHOD

3.- In all the Manager does, he / she must observe and apply, at all times, a sine qua non maxim, …everything is related to everything else…”

4.- Always manage as if it were a “project.” Use, at all times, the “…Project Management…” approach.

5.- Always use the systems methodology with the applied omniscience perspective.

In this case, David, I mean to assert: the term “Science” equates to about 90% “…Exact Sciences…” and to about 10% “…Social Sciences…” All science must be instituted with the engineering view.

6.- Always institute beyond-insurance risk management as you boldly integrate it with your futuring skill / expertise.

7.- In my firmest opinion, the following must be complied with, verbatim: corporate strategic planning and execution (performing) are a function of a grander application of beyond-insurance risk management. It will never work well the other way around. TAIRM is the optimal mode for advanced strategic planning and execution (performing).

TAIRM (Transformative and Integrative Risk Management) is not only focused on terminating, mitigating and modulating risks (expenses of treasure and losses of life), but also concentrated on bringing under control fiscally-sound, sustainable organizations and initiatives.

TAIRM underpins sensible business prosperity and sustainable growth and progress.

8.- I also believe that we must pragmatically apply the scientific method in all we manage to the best of our capacities.

If we are “…MANAGERS…” in a Knowledge Economy and Knowledge Era (not a knowledge-driven eon because of superficial and hollow caprices of fools and simpletons), we must therefore do extensive and intensive learning and un-learning for life if we want to succeed and be sustainable.

As a consequence, Dr. Noel M. Tichy, PhD. argues, …Today, intellectual assets trump physical assets in nearly every industry…”

Consequently, Alvin Toffler indicates, …In the world of the future, THE NEW ILLITERATE WILL BE THE PERSON WHO HAS NOT LEARNED TO LEARN…”

We don’t need to be scientists to learn some basic principles of advanced science.

EFFORT

Accordingly, Dr. Carl Sagan, PhD. expressed, …We live in a society exquisitely dependent on science and technology, in which hardly anyone knows about science and technology…” And Edward Teller stated, …The science of today is the technology of tomorrow…”

Also crucial is this quotation by Winston Churchill, …If we are to bring the broad masses of the people in every land to the table of abundance, IT CAN ONLY BE BY THE TIRELESS IMPROVEMENT OF ALL OF OUR MEANS OF TECHNICAL PRODUCTION…”

9.- In any management undertaking, and given the universal volatility and rampant and uninterrupted rate of change, one must think and operate in a fluid womb-to-tomb mode.

The manager must think and operate holistically (both systematically and systemically) at all times.

The manager must also be: i) Multidimensional, ii) Interdisciplinary, iii) Multifaceted, iv) Cross-functional, and v) Multitasking.

That is, the manager must now be an expert state-of-the-art generalist and erudite. ERGO, THIS IS THE NEWEST SPECIALIST AND SPECIALIZATION.

Managers must never manage elements, components or subsystems separately or disparately (that is, they mustn’t ever manage in series).

Managers must always manage the entire system at the same time (that is, managing in parallel, or simultaneously, the totality of the whole at once).

10.- In any profession, beginning with management, one must always and cleverly upgrade his / her learning and education until the last exhale.

An African proverb argues, …Tomorrow belongs to the people who prepare for it…” And Winston Churchill established, …The empires of the future are the empires of the mind…” And an ancient Chinese proverb: …It is not our feet that move us along — it is our minds…”

And Malcolm X observed, …The future belongs to those who prepare for it today…” And Leonard I. Sweet considered, …The future is not something we enter. The future is something we create…”

And finally, James Thomson argued, …Great trials seem to be a necessary preparation for great duties …”

The entire document is available at http://lnkd.in/bYP2nDC

Futurewise Success Tenets

“Futurewise Success Tenets” is an excerpt from “The Future of Scientific Management, Today”. To read the entire piece, just click the link at the end of the article. It follows:

(1) Picture mentally, radiantly. (2) Draw outside the canvas. (3) Color outside the vectors. (4) Sketch sinuously. (5) Far-sight beyond the mind’s intangible exoskeleton. (6) Abduct indiscernible falsifiable convictions. (7) Reverse-engineering a gene and a bacterium or, better yet, the lucrative genome. (8) Guillotine the over-weighted status quo. (9) Learn how to add up ─ in your own brainy mind ─ colors, dimensions, aromas, encryptions, enigmas, phenomena, geometrical and amorphous in-motion shapes, methods, techniques, codes, written lines, symbols, contexts, locus, venues, semantic terms, magnitudes, longitudes, processes, tweets, “…knowledge-laden…” hunches and omniscient bliss, so forth. (10) Project your wisdom’s wealth onto communities of timeless-connected wikis. (11) Cryogenize the infamous illiterate by own choice and reincarnate ASAP (multiverse teleporting out of a warped / wormed passage) Da Vinci, Bacon, Newton, Goethe, Bonaparte, Edison, Franklyn, Churchill, Einstein, and Feynman. (12) Organize relationships into voluntary associations that are mutually beneficial and accountable for contributing productively to the surrounding community. (13) Practice the central rule of good strategy, which is to know and remain true to your core business and invest for leadership and R&D+Innovation. (14) Kaisen, SixSigma, Lean, LeanSigma, “…Reliability Engineer…” (the latter as solely conceived and developed by Procter & Gamble and Los Alamos National Laboratories) it all unthinkably and thoroughly by recombinant, a là Einstein Gedanke-motorized judgment (that is to say: Einsteinian Gedanke [“…thought experiments…”]. (15) Provide a road-map / blueprint for drastically compressing (‘crashing’) the time’s ‘reticules’ it will take you to get on the top of your tenure, nonetheless of your organizational level. (16) With the required knowledge and relationships embedded in organizations, create support for, and carry out transformational initiatives. (17) Offer a tested pathway for addressing the linked challenges of personal transition and organizational transformation that confront leaders in the first few months in a new tenure. (18) Foster momentum by creating virtuous cycles that build credibility and by avoiding getting caught in vicious cycles that harm credibility. (19) Institute coalitions that translate into swifter organizational adjustments to the inevitable streams of change in personnel and environment. (20) Mobilize and align the overriding energy of many others in your organization, knowing that the “…wisdom of crowds…” is upfront and outright rubbish. (21) Step outside the boundaries of the framework’s system when seeking a problem’s solution. (22) Within zillion tiny bets, raise the ante and capture the documented learning through frenzy execution. (23) “…Moonshine…” and “…Skunks-work…” and “…Re-Imagineering…” all, holding in your mind the motion-picture image that, regardless of the relevance of “…inputs…” and “…outputs,…”, entails that the highest relevance is within the sophistication within the THROUGHPUT.….. (69) Figure out exactly which neurons to make synapses with. (70) Wire up synapses the soonest…”

Read the full material at http://lnkd.in/bYP2nDC

Regards,

Mr. Andres Agostini
www.linkedin.com/in/AndresAgostini

Zach Urbina, solar science, SDO

After nearly six months of relative quiet, our parent star, the sun, awoke. Recent predictions from leading solar scientists ranged from “cycle 24 will be our weakest yet” to “cycle 24 is quiet now, because it will be double peaked.” It appears that the latter is emerging as the clearer truth.

Over the course of a week, between October 24th and 31st, more than 28 substantial flares fired off from the sun. Several of the more recent flares sent massive clouds of ionized particulate matter, called coronal mass ejections, toward Earth.

Four of the recent flares were X-class solar flares, the strongest on the scale, erupting from the photosphere of the sun, causing minor radio blackouts, and sending coronal mass ejections in many different directions, including toward Earth.

Unfortunately, due to the recent US government shutdown, the suite of tools generally available to the public for space weather prediction was offline. As of November 2nd, there remains a backlog of missing data for several sets of publicly available apps and Internet resources for space weather prediction and observation.

This period of excited solar activity comes on the heels of recently published science that revealed no discernible connection between the orbital period of Jupiter (11.87 years) and the length of solar cycles (which varies between 9 and 12 years). This may sound intuitively obvious, but holdouts from several corners of the scientific world have sought to ascribe significance to the similarities between the two periods. Some claimed that the much less massive Jupiter somehow caused solar dynamic activity.

Published analyses of radioactive isotopes of beryllium and carbon in 10,000-year-old ice core samples recently dismissed the similarities between the two periods as consistent with chance and statistically insignificant. This leaves open the possibility that the solar dynamo is indeed self-excited, through a process called the meridional flow, whose predictive models are still being tested and refined.

Meridional flow moves solar material just beneath the apparent surface of the sun (the photosphere). Models of the meridional flow have proven difficult to pin down with predictive certainty, but NASA’s Dr. David Hathaway and a number of other leading solar scientists are moving closer to understanding the dynamic forces that drive the activity of our parent star.

Doppler imagery from SDO (the Solar Dynamics Observatory), taken every 45 seconds over the past two years, was recently analyzed as well. The results of that analysis revealed that the current models predicting the rate of meridional flow are off by at least half. In short, the conveyor-belt-like flow of plasma that returns material to the photosphere of the sun moves more quickly than originally theorized. This misunderstanding might have contributed to the lower-than-normal forecast for solar cycle 24. Time will tell whether the sun is truly tapering off its maximum output for this cycle, or whether more activity is coming Earth’s way.

The NASA/ESA heliophysics fleet currently observing the sun comprises nearly twenty spacecraft in various orbits, measuring not only our star but also the interplanetary space between it and Earth, as well as the intricate space weather system that interacts constantly with our planet.

One of the most exciting moments in solar sciences comes when an Earth-directed coronal mass ejection collides with the Earth’s magnetic field to the degree of causing a geomagnetic storm. The Earth’s magnetic field is fully capable of protecting our planet from the occasional glancing blow from the sun; however, strong clouds of magnetized plasma can often find their way into Earth’s atmosphere, causing minor interference with electrical grids as well as the constellation of GPS satellites.

The Carrington Event of 1859 and the Quebec Blackout of 1989 revealed that the Earth is indeed vulnerable to space weather events. There have been calls, falling largely upon the deaf ears of US legislators, for the electrical grid of the United States to be fully retrofitted with radiation-hardened components that could handle the surges associated with geomagnetic storms. The problem will not go away, and unlike global warming and other self-destructive, human-propagated phenomena, there is little we can do to curtail such activity, other than being aware and prepared.

As the number of active regions on the apparent surface of the sun increases, we are likely to experience more geo-effective activity in the coming weeks and months. The solar cycle remains near its peak, and overall activity will gently taper off as solar maximum wanes into the lull of solar minimum.

For solar science enthusiasts, including this writer, this period of solar activity is an ideal time to better understand the dynamic interactions that the sun has with Earth. It is only in the last 10 to 15 years that we’ve understood our parent star to be a dynamic system, not as predictable as we’d assumed it to be in the nearly 400 years of solar science observations.

Deepening the scientific understanding of our parent star is as much about protecting Earth as it is about examining the interconnected nature of the Earth-Sun space weather system. It behooves all of humanity to keep apprised of this connection as we establish new laws and more accurately understand our extended natural environment.

http://www.ardmediathek.de/wdr-fernsehen/quarks-und-co/quarks-und-co-auf-teilchenjagd-ranga-yogeshwar-am-cern?documentId=17482362

From it I learned about the unfathomable degree of social coherence among the many thousands of scientists and engineers whose synergy makes possible this largest constructive effort of humankind since the pyramids. It also revealed the wonderful spirit of Peter Higgs, who is a loving mind in the old sense of a devout scientist.

Ranga Yogeshwar here made it clear to me for the first time WHY this brave community could not respond to a novelty that would have destroyed its cohesion. The effort was too big to be disturbed even for a few days of “second thoughts.” It was too late for that from the beginning.

So CERN’s public refusal to update its “safety report” for more than 5 years is part and parcel of the beauty of the new pyramid (the word means “immortality”). Imagine the pyramids’ construction having been disturbed by news that interfered with its political and divine purpose: this would have meant the end of the whole effort and the civilization behind it.

So I apologize to CERN: “Dear admired colleagues, please, forgive me that I tried to disturb your super-human effort by insisting on a safety update!”

Ordinary mortals have no right to interfere with a divine project. If the devotion behind it implies accepting a harsh fate for the community at large, the Egyptian/Aztec rationality of a large number of dedicated individuals cannot possibly be tampered with. It is not religion: it is collective dynamics – chaos theory – or as my colleague Hermann Haken calls it, Synergetics. “When the flag is flying the brain is in the trumpet.” The rules of rationality are suspended in this case according to the Bible.

Poor Einstein, poor dear Peter Higgs, poor humankind. Hopefully, someone jumps upon the 5 years.

Science is based on dialog. Not a single colleague, including Hawking and his crew, has contradicted my safety-relevant finding in 5 years. This fact can only have one of two reasons:
(1) The finding is so embarrassingly stupid that to take it up would suffice to soil the respondent.
(2) The finding harks back so deeply to the young Einstein that it causes fear to tread upon it.

One country – my own – singlehandedly left CERN in response (only to surreptitiously return under non-publicized pressure).

Obviously the offered result (“gothic-R theorem”) has historical dimensions. Everyone’s survival is affected by it probability-wise if it is valid. We “live in an interesting time” as the Chinese proverb goes.

800 newspapers reported on the theorem in 2008. Only Aljazeera remained four years later. There is a “curfew of silence” obeyed by my colleagues. Not a single counterargument is in the literature against my result published in 2008 and its many sequels.

Why is CERN’s official “Safety Report” allowed to go un-updated for 5 years? And why is the friendly admonition given by a court to CERN’s representatives standing before it, to please admit a “safety conference,” a planet-wide taboo topic even after almost three years? Refusing dialog and discussion is one thing; refusing to update on safety is another. In the one case only a single person is the victim; in the other it is you and you and you.

The most recent prime-time movie “Heroes – the Fate of the World in their Hands,” aired last Thursday by RTL ( http://www.imdb.com/title/tt2324130/plotsummary?ref_=tt_stry_pl ), took me completely by surprise. Everyone can spot the movie’s weaknesses (rating 1.7), but: imagine the loving courage that stands behind it!

CERN cannot remain silent any longer.

P.S.: I cordially congratulate Peter Higgs on his Nobel prize, which he fully deserves along with his colleague. And I ask him for the great honor of replying to this blog post, which congratulates him first.

Peer-to-Peer Science

The Century-Long Challenge to Respond to Fukushima

Emanuel Pastreich (Director)

Layne Hartsell (Research Fellow)

The Asia Institute

More than two years after an earthquake and tsunami wreaked havoc on a Japanese power plant, the Fukushima nuclear disaster is one of the most serious threats to public health in the Asia-Pacific, and the worst case of nuclear contamination the world has ever seen. Radiation continues to leak from the crippled Fukushima Daiichi site into groundwater, threatening to contaminate the entire Pacific Ocean. The cleanup will require an unprecedented global effort.

Initially, the leaked radioactive materials consisted of cesium-137 and 134, and to a lesser degree iodine-131. Of these, the real long-term threat comes from cesium-137, which is easily absorbed into bodily tissue—and its half-life of 30 years means it will be a threat for decades to come. Recent measurements indicate that escaping water also has increasing levels of strontium-90, a far more dangerous radioactive material than cesium. Strontium-90 mimics calcium and is readily absorbed into the bones of humans and animals.
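To make the 30-year half-life figure concrete, here is a minimal decay-arithmetic sketch in Python; the specific time points are chosen only for illustration:

```python
# Fraction of an initial cesium-137 inventory remaining after t years,
# using the ~30-year half-life quoted above: N(t)/N0 = 0.5 ** (t / 30).

CS137_HALF_LIFE_YEARS = 30.0

def fraction_remaining(years, half_life=CS137_HALF_LIFE_YEARS):
    """Return the fraction of the original radionuclide left after `years`."""
    return 0.5 ** (years / half_life)

for t in (10, 30, 60, 100, 300):
    print(f"after {t:3d} years: {fraction_remaining(t):.1%} remains")
# After 30 years half remains; after a century roughly 10% is still present,
# which is why the text calls cesium-137 a threat for decades to come.
```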

The Tokyo Electric Power Company (TEPCO) recently announced that it lacks the expertise to effectively control the flow of radiation into groundwater and seawater and is seeking help from the Japanese government. TEPCO has proposed setting up a subterranean barrier around the plant by freezing the ground, thereby preventing radioactive water from eventually leaking into the ocean—an approach that has never before been attempted in a case of massive radiation leakage. TEPCO has also proposed erecting additional walls now that the existing wall has been overwhelmed by the approximately 400 tons per day of water flowing into the power plant.

But even if these proposals were to succeed, they would not constitute a long-term solution.

A New Space Race

Solving the Fukushima Daiichi crisis needs to be considered a challenge akin to putting a person on the moon in the 1960s. This complex technological feat will require focused attention and the concentration of tremendous resources over decades. But this time the effort must be international, as the situation potentially puts the health of hundreds of millions at risk. The long-term solution to this crisis deserves at least as much attention from government and industry as do nuclear proliferation, terrorism, the economy, and crime.

To solve the Fukushima Daiichi problem will require enlisting the best and the brightest to come up with a long-term plan to be implemented over the next century. Experts from around the world need to contribute their insights and ideas. They should come from diverse fields—engineering, biology, demographics, agriculture, philosophy, history, art, urban design, and more. They will need to work together at multiple levels to develop a comprehensive assessment of how to rebuild communities, resettle people, control the leakage of radiation, dispose safely of the contaminated water and soil, and contain the radiation. They will also need to find ways to completely dismantle the damaged reactor, although that challenge may require technologies not available until decades from now.

Such a plan will require the development of unprecedented technologies, such as robots that can function in highly radioactive environments. This project might capture the imagination of innovators in the robotics world and give a civilian application to existing military technology. Improved robot technology would prevent the tragic scenes of old people and others volunteering to enter into the reactors at the risk of their own wellbeing.

The Fukushima disaster is a crisis for all of humanity, but it is a crisis that can serve as an opportunity to construct global networks for unprecedented collaboration. Groups or teams aided by sophisticated computer technology can start to break down into workable pieces the immense problems resulting from the ongoing spillage. Then experts can come back with the best recommendations and a concrete plan for action. The effort can draw on the precedents of the Intergovernmental Panel on Climate Change, but it must go far further.

In his book Reinventing Discovery: The New Era of Networked Science, Michael Nielsen describes principles of networked science that can be applied on an unprecedented scale. The breakthroughs that come from this effort can also be used for other long-term programs such as the cleanup of the BP Deepwater Horizon oil spill in the Gulf of Mexico or the global response to climate change. The collaborative research regarding Fukushima should take place on a very large scale, larger than the sequencing of the human genome or the maintenance of the Large Hadron Collider.

Finally, there is an opportunity to entirely reinvent the field of public diplomacy in response to this crisis. Public diplomacy can move from a somewhat ambiguous effort by national governments to repackage their messaging to a serious forum for debate and action on international issues. As public diplomacy matures through the experience of Fukushima, we can devise new strategies for bringing together hundreds of thousands of people around the world to respond to mutual threats. Taking a clue from networked science, public diplomacy could serve as a platform for serious, long-term international collaboration on critical topics such as poverty, renewable energy, and pollution control.

Similarly, this crisis could serve as the impetus to make social networking do what it was supposed to do: help people combine their expertise to solve common problems. Social media could be used not as a means of exchanging photographs of lattes and overfed cats, but rather as an effective means of assessing the accuracy of information, exchanging opinions between experts, forming a general consensus, and enabling civil society to participate directly in governance. With the introduction into the social media platform of adequate peer review—such as that advocated by the Peer-to-Peer Foundation (P2P)—social media can play a central role in addressing the Fukushima crisis and responding to it. As Michel Bauwens, a leader in the P2P movement, suggests in an email, “peers are already converging in their use of knowledge around the world, even in manufacturing at the level of computers, cars, and heavy equipment.”

Here we may find the answer to the Fukushima conundrum: open the problem up to the whole world.

Peer-to-Peer Science

Making Fukushima a global project that seriously engages both experts and common citizens in the millions, or tens of millions, could give some hope to the world after two and a half years of lies, half-truths, and concerted efforts to avoid responsibility on the part of the Japanese government and international institutions. If concerned citizens in all countries were to pore through the data and offer their suggestions online, there could be a new level of transparency in the decision-making process and a flourishing of invaluable insights.

There is no reason why detailed information on radiation emissions and the state of the reactors should not be publicly available in enough detail to satisfy the curiosity of a trained nuclear engineer. If the question of what to do next comes down to the consensus of millions of concerned citizens engaged in trying to solve the problem, we will have a strong alternative to the secrecy that has dominated so far. Could our cooperation on the solution to Fukushima be an imperative to move beyond the existing barriers to our collective intelligence posed by national borders, corporate ownership, and intellectual property concerns?

A project to classify galaxies throughout the universe has demonstrated that if tasks are carefully broken up, it is possible for laypeople to play a critical role in solving technical problems. In the case of Galaxy Zoo, anyone who is interested can qualify to go online, classify distant galaxies by type, and enter the information into a database. It’s all part of a massive effort to expand our knowledge of the universe, which has been immensely successful and has demonstrated that there are aspects of scientific analysis that do not require a Ph.D. In the case of Fukushima, if an ordinary person examines satellite photographs online every day, he or she can become more adept than a professor at identifying unusual flows carrying radioactive materials. There is a massive amount of information related to Fukushima that requires analysis, and at present most of it goes virtually unanalyzed.
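As an illustration of how such distributed lay classifications can be combined into a usable result, here is a minimal majority-vote aggregation sketch in Python; the object names, labels, and agreement threshold are hypothetical, and this is not Galaxy Zoo’s actual pipeline:

```python
# Toy aggregation of volunteer classifications: each object keeps the label
# a clear majority of volunteers agreed on, otherwise it is flagged for
# expert review. The vote data below is invented for illustration.
from collections import Counter

votes = {
    "object-001": ["spiral", "spiral", "elliptical", "spiral"],
    "object-002": ["elliptical", "spiral", "merger", "elliptical"],
}

def aggregate(labels, threshold=0.6):
    """Return (consensus_label_or_None, agreement_fraction)."""
    label, count = Counter(labels).most_common(1)[0]
    agreement = count / len(labels)
    return (label if agreement >= threshold else None, agreement)

for obj, labels in votes.items():
    label, agreement = aggregate(labels)
    print(f"{obj}: {label or 'needs expert review'} ({agreement:.0%} agreement)")
```

The same pattern, volunteers labeling in parallel and software reconciling their answers, is what would allow satellite imagery around Fukushima to be screened at scale.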

An effective response to Fukushima needs to accommodate both general and specific perspectives. It will initially require a careful and sophisticated setting of priorities. We can then set up convergence groups that, aided by advanced computation and careful efforts at multidisciplinary integration, could respond to crises and challenges with great effectiveness. Convergence groups can also serve as a bridge between the expert and the layperson, encouraging a critical continuing education about science and society.

Responding to Fukushima is as much about educating ordinary people about science as it is about gathering together highly paid experts. It is useless for experts to come up with novel solutions if they cannot implement them. But implementation can only come about if the population as a whole has a deeper understanding of the issues. Large-scale networked science efforts that are inclusive will make sure that no segments of society are left out.

If the familiar players (NGOs, central governments, corporations, and financial institutions) are unable to address the unprecedented crises facing humanity, we must find ways to build social networks, not only as a means to come up with innovative concepts, but also to promote and implement the resulting solutions. That process includes pressuring institutions to act. We need to use true innovation to pave the way to an effective application of science and technology to the needs of civil society. There is no better place to start than the Internet and no better topic than the long-term response to the Fukushima disaster.

Originally published in Foreign Policy in Focus on September 3, 2013

Recent discussions on the properties of micro-black-holes raised enough questions to reignite some interest in the subject (apologies to those exhausted from reading on the subject here at the Lifeboat Foundation). A claim made by physicists at the University of Innsbruck in Austria that a new attractive force arises from black-body radiation [1] makes one speculate whether a similar effect could result from the Hawking radiation theorized to be emitted from micro-black-holes. This is an unlikely scenario, given the very different natures supposed of Hawking radiation and black-body radiation, but a curious thought nonetheless. If a light component of Hawking radiation could replicate this net attractive force, accepted accretion and radiation rates could be revised to account for such newly hypothesized additional forces.

Not so fast — even if such a new force did take effect in these scenarios, one would expect it to have negligible impact on safety assurances. Officially estimated accretion rates are many orders of magnitude lower than estimated radiation rates, and these estimates concur with observational evidence on the longevity of white-dwarf stars.

That is not to conclude that such new forces are necessary to continue the debate. Certain old, disputed parameter ranges suggest different accretion rates relative to radiation rates that could bridge the vast gap between such estimates, so theorized catastrophic outcomes [3] are not necessarily refuted by safety assurances, at least not those resting on white-dwarf longevity.

A more pertinent point is that if equilibrium could manifest between radiation and accretion rates, micro-black-holes trapped in Earth’s gravitation could become persistent heat engines with flux considerable enough [2] to raise environmental concern about planetary heating.

Meanwhile, that stalwart safety assurance against micro-black-hole accretion risks, the longevity of white dwarf stars, faces a new argument in which the law of angular momentum conservation is considered a significant factor negating the G&M [4] calculated stopping distances of naturally occurring micro-black-holes in white dwarf stars, since it enforces an immediate disengagement from struck quarks at such near-luminal speeds. This is unlike LHC-produced micro-black-holes, it is argued, which enjoy a 30,000 times longer interaction time [5].

One does not feel motivated to reach for ‘end is nigh’ placards over such fringe discussions, but one can surmise that discussion of LHC safety assurances is far from the end of its rope in certain circles. Thank you to those involved for their continued discussions.

———————————————–

[1] Attractive Optical Forces from Blackbody Radiation — Sonnleitner, Ritsch-Marte, Ritsch, 2013. ( http://prl.aps.org/abstract/PRL/v111/i2/e023601 )
[2] Terrestrial Flux of Hypothetical Stable MBH Produced in Colliders relative to Natural CR Exposure — 2012. ( http://vixra.org/pdf/1203.0055v2.pdf )
[3] Potential catastrophic risk from metastable quantum-black holes produced at particle colliders — R. Plaga, 2008/2009. ( http://arxiv.org/pdf/0808.1415v3.pdf )
[4] Astrophysical implications of hypothetical stable TeV-scale black holes — Giddings, Mangano, 2008. ( http://arxiv.org/abs/0806.3381 )
[5] Einstein's Equivalence Principle, C-Global, and the Widely Ignored Factor 30,000 — O. E. Rossler, 2013. ( http://eujournal.org/index.php/esj/article/view/1577/1583 )

1) CERN officially attempted to produce ultraslow miniature black holes on earth. It has announced that it will continue doing so after the current upgrade break of more than a year.

2) Miniature black holes possess radically new properties according to published scientific results that have gone unchallenged in the literature for 5 years: no Hawking evaporation; unchargedness; invisibility to CERN’s detectors; an enhanced chance of being produced.

3) Of the millions of miniature black holes hoped to have been produced, at least one is bound to be slow enough to stay inside the earth and circulate there.

4) This miniature black hole circulates undisturbed – until it captures its first charged quark. From then on it grows exponentially, doubling in size in months at first, later in weeks.

5) As a consequence, after about 100 doublings, earth will start showing manifest signs of “cancer” (a toy illustration of the doubling arithmetic follows point 7). And she will – after first losing her atmosphere – die within months, leaving nothing but a 2-cm black hole in her wake that still keeps the moon on its course.

6) CERN’s roundabout-way safety argument of 2008, invoking the observed longevity of neutron stars as a guarantee for earth, got falsified on the basis of quantum mechanics in a paper published in mid-2008.

7) CERN’s second roundabout-way safety argument of 2008, invoking the observed longevity of white dwarf stars as a guarantee for earth, likewise got falsified in scientific papers, the first of which was published in mid-2008. CERN overlooked the enlarged-cross-section principle valid for ultra-slow artificial, compared to ultrafast natural, miniature black holes. The same effect is frighteningly familiar from the slow “cold” neutrons in nuclear fission.
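As a purely arithmetical sketch of the doubling claim in point 5, here is a toy Python calculation; the seed mass and the one-month doubling time are hypothetical placeholders, not measured values:

```python
# Toy exponential-doubling arithmetic for point 5: growth factor and elapsed
# time after N doublings. Seed mass and doubling time are assumptions made
# only to illustrate the arithmetic.

SEED_MASS_KG = 2e-24          # hypothetical TeV-scale seed mass (assumption)
DOUBLING_TIME_MONTHS = 1.0    # "doubling in size in months at first" (assumption)

for n in (10, 50, 100):
    growth = 2 ** n
    mass = SEED_MASS_KG * growth
    years = n * DOUBLING_TIME_MONTHS / 12
    print(f"{n:3d} doublings: factor ~{growth:.2e}, "
          f"mass ~{mass:.2e} kg, elapsed ~{years:.1f} years")
# 100 doublings amount to a growth factor of roughly 1.3e30; at one doubling
# per month that corresponds to a bit over 8 years of elapsed time.
```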

In summary, seven coincidences of “bad luck” were found to cooperate like Macbeth’s fateful 3 witches. CERN decided to accept the blemish of not updating its safety report for 5 years so far. It also steadfastly refuses the safety conference publicly requested on the web on April 18, 2008 (“Honey, I shrunk the earth”). Most significantly, CERN to this day refuses to heed a Cologne court’s advice, handed out to CERN’s representatives standing before it on January 27, 2011, to hold a “safety conference.”

Unless there is a safety guarantee that CERN keeps a secret from the whole world while mentioning it only behind closed doors to bring the World Press Council and the UN Security Council to refrain from doing their otherwise inalienable duty, the above-sketched scenario has no parallel in history.

Not a single scientific publication world-wide claims to falsify one of the above-sketched results (points 2–7). Only a very charismatic scientist may be able to call back the media and the mighty behind closed doors. I have a hunch who this could be. But I challenge him to no longer hide so the world can see to whom she owes her hopefully beneficial fate.

Has there ever been a more unsettling story kept from the citizens of this planet?

For J.O.R.

This essay was also published by the Institute for Ethics & Emerging Technologies and by Transhumanity under the title “Is Price Performance the Wrong Measure for a Coming Intelligence Explosion?”.

Introduction

Most thinkers speculating on the coming of an intelligence explosion (whether via Artificial-General-Intelligence or Whole-Brain-Emulation/uploading), such as Ray Kurzweil [1] and Hans Moravec [2], typically use computational price performance as the best measure for an impending intelligence explosion (e.g. Kurzweil’s measure is when enough processing power to satisfy his estimates for basic processing power required to simulate the human brain costs $1,000). However, I think a lurking assumption lies here: that it won’t be much of an explosion unless available to the average person. I present a scenario below that may indicate that the imminence of a coming intelligence-explosion is more impacted by basic processing speed – or instructions per second (IPS), regardless of cost or resource requirements per unit of computation – than it is by computational price performance. This scenario also yields some additional, counter-intuitive conclusions, such as that it may be easier (for a given amount of “effort” or funding) to implement WBE+AGI than it would be to implement AGI alone – or rather that using WBE as a mediator of an increase in the rate of progress in AGI may yield an AGI faster or more efficiently per unit of effort or funding than it would be to implement AGI directly.

Loaded Uploads:

Petascale supercomputers in existence today exceed the processing-power requirements estimated by Kurzweil, Moravec, and Storrs-Hall [3]. If a wealthy individual were uploaded onto a petascale supercomputer today, they would have the same computational resources as the average person is projected to have in 2019 according to Kurzweil’s figures, the year in which computational processing power equal to that of the human brain, which he estimates at 20 quadrillion calculations per second, is expected to cost $1,000. While we may not yet have the necessary software to emulate a full human nervous system, the bottleneck is progress in the field of neurobiology rather than software performance in general. What is important is that the raw processing power estimated by some has already been surpassed – and the possibility of creating an upload may not have to wait for drastic increases in computational price performance.

The rate of signal transmission in electronic computers has been estimated to be roughly 1 million times as fast as the signal transmission speed between neurons, which is limited to the rate of passive chemical diffusion. Since the rate of signal transmission equates with subjective perception of time, an upload would presumably experience the passing of time one million times faster than biological humans. If Yudkowsky’s observation [4] that this would be the equivalent of experiencing all of history since Socrates every 18 “real-time” hours is correct, then such an emulation would experience roughly 250 subjective years for every hour, or about 4 years a minute. A day would be equal to 6,000 years, a week to roughly 42,000 years, and a month to roughly 180,000 years.
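A minimal arithmetic sketch of these conversions in Python, taking the 250-subjective-years-per-real-hour figure quoted above as a given rather than deriving it:

```python
# Convert real-time durations into subjective durations for an emulation,
# assuming the ~250 subjective years per real hour figure quoted above.

SUBJECTIVE_YEARS_PER_REAL_HOUR = 250.0

def subjective_years(real_hours):
    """Subjective years experienced during `real_hours` of wall-clock time."""
    return real_hours * SUBJECTIVE_YEARS_PER_REAL_HOUR

durations_in_hours = {
    "one minute": 1 / 60,
    "one hour": 1,
    "one day": 24,
    "one week": 24 * 7,
    "one month (30 days)": 24 * 30,
}
for name, hours in durations_in_hours.items():
    print(f"{name:>20}: ~{subjective_years(hours):,.0f} subjective years")
# one day ~ 6,000 years, one week ~ 42,000 years, one month ~ 180,000 years
```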

Moreover, these figures use the signal transmission speed of current, electronic paradigms of computation only, and thus the projected increase in signal-transmission speed brought about through the use of alternative computational paradigms, such as 3-dimensional and/or molecular circuitry or Drexler’s nanoscale rod-logic [5], can only be expected to increase such estimates of “subjective speed-up”.

The claim that the subjective perception of time and the “speed of thought” is a function of the signal-transmission speed of the medium or substrate instantiating such thought or facilitating such perception-of-time follows from the scientific-materialist (a.k.a. metaphysical-naturalist) claim that the mind is instantiated by the physical operations of the brain. Thought and perception of time (or the rate at which anything is perceived really) are experiential modalities that constitute a portion of the brain’s cumulative functional modalities. If the functional modalities of the brain are instantiated by the physical operations of the brain, then it follows that increasing the rate at which such physical operations occur would facilitate a corresponding increase in the rate at which such functional modalities would occur, and thus the rate at which the experiential modalities that form a subset of those functional modalities would likewise occur.

Petascale supercomputers have surpassed the rough estimates made by Kurzweil (20 petaflops, or 20 quadrillion calculations per second), Moravec (100,000 MIPS), and others. Most argue that we still need to wait for software improvements to catch up with hardware improvements. Others argue that even if we don’t understand how the operations of the brain’s individual components (e.g. neurons, neural clusters, etc.) converge to create the emergent phenomenon of mind – or even how such components converge so as to create the basic functional modalities of the brain that have nothing to do with subjective experience – we would still be able to create a viable upload. Nick Bostrom & Anders Sandberg, in their 2008 Whole Brain Emulation Roadmap [6] for instance, have argued that if we understand the operational dynamics of the brain’s low-level components, we can then computationally emulate such components, and the emergent functional modalities of the brain and the experiential modalities of the mind will emerge therefrom.
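A small back-of-the-envelope comparison of these estimates against a contemporary petascale machine, sketched in Python; the ~33.9-petaflop figure is an approximate Linpack number for the fastest TOP500 system of mid-2013, and the MIPS-to-FLOPS equivalence is a crude assumption:

```python
# Compare brain-emulation processing estimates with a contemporary (2013)
# petascale supercomputer. Numbers are approximate and for illustration only.

PETA = 1e15
TOP_2013_MACHINE_FLOPS = 33.9 * PETA  # approximate mid-2013 Linpack leader (assumption)

estimates = {
    "Kurzweil: 20 quadrillion calc/s": 20 * PETA,
    "Moravec: 100,000 MIPS (crudely treating 1 MIPS as 1e6 FLOPS)": 1e5 * 1e6,
}

for name, flops in estimates.items():
    print(f"{name}: exceeded by a factor of ~{TOP_2013_MACHINE_FLOPS / flops:,.1f}")
# Kurzweil's figure is exceeded by a factor of roughly 1.7; Moravec's much
# older estimate is exceeded by several orders of magnitude.
```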

Mind Uploading is (Largely) Independent of Software Performance:

Why is this important? Because if we don’t have to understand how the separate functions and operations of the brain’s low-level components converge so as to instantiate the higher-level functions and faculties of brain and mind, then we don’t need to wait for software improvements (or progress in methodological implementation) to catch up with hardware improvements. Note that for the purposes of this essay “software performance” will denote the efficacy of the “methodological implementation” of an AGI or Upload (i.e. designing the mind-in-question, regardless of hardware or “technological implementation” concerns) rather than how optimally software achieves its effect(s) for a given amount of available computational resources.

This means that if the estimates for sufficient processing power to emulate the human brain noted above are correct then a wealthy individual could hypothetically have himself destructively uploaded and run on contemporary petascale computers today, provided that we can simulate the operation of the brain at a small-enough scale (which is easier than simulating components at higher scales; simulating the accurate operation of a single neuron is less complex than simulating the accurate operation of higher-level neural networks or regions). While we may not be able to do so today due to lack of sufficient understanding of the operational dynamics of the brain’s low-level components (and whether the models we currently have are sufficient is an open question), we need wait only for insights from neurobiology, and not for drastic improvements in hardware (if the above estimates for required processing-power are correct), or in software/methodological-implementation.

If emulating the low-level components of the brain (e.g. neurons) will give rise to the emergent mind instantiated thereby, then we don’t actually need to know “how to build a mind” – whereas we do in the case of an AGI (which for the purposes of this essay shall denote AGI not based off of the human or mammalian nervous system, even though an upload might qualify as an AGI according to many people’s definitions). This follows naturally from the conjunction of the premises that 1. the system we wish to emulate already exists and 2. we can create (i.e. computationally emulate) the functional modalities of the whole system by only understanding the operation of the low-level components’ functional modalities.

Thus, I argue that a wealthy upload who did this could conceivably accelerate the coming of an intelligence explosion by such a large degree that it could occur before computational price performance drops to a point where the basic processing power required for such an emulation is available for a widely-affordable price, say for $1,000 as in Kurzweil’s figures.

Such a scenario could make basic processing power, or Instructions-Per-Second, more indicative of an imminent intelligence explosion or hard take-off scenario than computational price performance.

If we can achieve human whole-brain-emulation even one week before we can achieve AGI (the cognitive architecture of which is not based off of the biological human nervous system) and this upload is set to work on creating an AGI, then such an upload would have, according to the “subjective-speed-up” factors given above, roughly 42,000 subjective years within which to succeed in designing and implementing an AGI for every one real-time week that normatively-biological AGI workers have to succeed.

The subjective-perception-of-time speed-up alone would be enough to greatly improve his/her ability to accelerate the coming of an intelligence explosion. Other features, like increased ease-of-self-modification and the ability to make as many copies of himself as he has processing power to allocate to, only increase his potential to accelerate the coming of an intelligence explosion.

This is not to say that we can run an emulation without any software at all. Of course we need software – but we may not need drastic improvements in software, or a reinventing of the wheel in software design.

So why should we be able to simulate the human brain without understanding its operational dynamics in exhaustive detail? Are there any other processes or systems amenable to this circumstance, or is the brain unique in this regard?

There is a simple reason why this claim seems intuitively doubtful. One would expect that we must understand the underlying principles of a given technology’s operation in order to implement and maintain it. This is, after all, the case for all other technologies throughout the history of humanity. But the human brain is categorically different in this regard because it already exists.

If, for instance, we found a technology and wished to recreate it, we could do so by copying the arrangement of components. But in order to make any changes to it, or any variations on its basic structure or principles-of-operation, we would need to know how to build it, maintain it, and predictively model it with a fair amount of accuracy. In order to make any new changes, we need to know how such changes will affect the operation of the other components – and this requires being able to predictively model the system. If we don’t understand how changes will impact the rest of the system, then we have no reliable means of implementing any changes.

Thus, if we seek only to copy the brain, and not to modify or augment it in any substantial way, then it is wholly unique in the fact that we don’t need to reverse engineer its higher-level operations in order to instantiate it.

This approach should be considered a category separate from reverse-engineering. It would indeed involve a form of reverse-engineering on the scale we seek to simulate (e.g. neurons or neural clusters), but it lacks many features of reverse-engineering by virtue of the fact that we don’t need to understand its operation on all scales. For instance, knowing the operational dynamics of the atoms composing a larger system (e.g. any mechanical system) wouldn’t necessarily translate into knowledge of the operational dynamics of its higher-scale components. The approach mind-uploading falls under, where reverse-engineering at a small enough scale is sufficient to recreate it, provided that we don’t seek to modify its internal operation in any significant way, I will call Blind Replication.

Blind replication disallows any sort of significant modifications, because if one doesn’t understand how processes affect other processes within the system then they have no way of knowing how modifications will change other processes and thus the emergent function(s) of the system. We wouldn’t have a way to translate functional/optimization objectives into changes made to the system that would facilitate them. There are also liability issues, in that one wouldn’t know how the system would work in different circumstances, and would have no guarantee of such systems’ safety or their vicarious consequences. So government couldn’t be sure of the reliability of systems made via Blind Replication, and corporations would have no way of optimizing such systems so as to increase a given performance metric in an effort to increase profits, and indeed would be unable to obtain intellectual property rights over a technology that they cannot describe the inner-workings or “operational dynamics” of.

However, government and private industry wouldn’t be motivated by such factors (that is, ability to optimize certain performance measures, or to ascertain liability) in the first place, if they were to attempt something like this – since they wouldn’t be selling it. The only reason I foresee government or industry being interested in attempting this is if a foreign nation or competitor, respectively, initiated such a project, in which case they might attempt it simply to stay competitive in the case of industry and on equal militaristic defensive/offensive footing in the case of government. But the fact that optimization-of-performance-measures and clear liabilities don’t apply to Blind Replication means that a wealthy individual would be more likely to attempt this, because government and industry have much more to lose in terms of liability, were someone to find out.

Could Upload+AGI be easier to implement than AGI alone?

This means that the creation of an intelligence with a subjective perception of time significantly greater than that of unmodified humans (what might be called Ultra-Fast Intelligence) may be more likely to occur via an upload, rather than an AGI, because the creation of an AGI is largely determined by increases in both computational processing and software performance/capability, whereas the creation of an upload may be determined by and large by processing power and thus remain largely independent of the need for significant improvements in software performance or “methodological implementation.”

If the premise that such an upload could significantly accelerate a coming intelligence explosion (whether by using his/her comparative advantages to recursively self-modify his/herself, to accelerate innovation and R&D in computational hardware and/or software, or to create a recursively-self-improving AGI) is taken as true, it follows that even the coming of an AGI-mediated intelligence explosion specifically, despite being impacted by software improvements as well as computational processing power, may be more impacted by basic processing power (e.g. IPS) than by computational price performance — and may be more determined by computational processing power than by processing power + software improvements. This is only because uploading is likely to be largely independent of increases in software (i.e. methodological as opposed to technological) performance. Moreover, development in AGI may proceed faster via the vicarious method outlined here – namely having an upload or team of uploads work on the software and/or hardware improvements that AGI relies on – than by directly working on such improvements in “real-time” physicality.

Virtual Advantage:

The increase in subjective perception of time alone (if Yudkowsky’s estimate is correct, a ratio of 250 subjective years for every “real-time” hour) gives him/her a massive advantage. It would also likely allow him/her to counteract and negate any attempts made from “real-time” physicality to stop, slow or otherwise deter him/her.

There is another feature of virtual embodiment that could increase the upload’s ability to accelerate such developments. Neural modification, with which he could optimize his current functional modalities (e.g. what we coarsely call “intelligence”) or increase the metrics underlying them, thus amplifying his existing skills and cognitive faculties (as in Intelligence Amplification or IA), as well as creating categorically new functional modalities, is much easier from within virtual embodiment than it would be in physicality. In virtual embodiment, all such modifications become a methodological, rather than technological, problem. To enact such changes in a physically-embodied nervous system would require designing a system to implement those changes, and actually implementing them according to plan. To enact such changes in a virtually-embodied nervous system requires only a re-organization or re-writing of information. Moreover, in virtual embodiment, any changes could be made, and reversed, whereas in physical embodiment reversing such changes would require, again, designing a method and system of implementing such “reversal-changes” in physicality (thereby necessitating a whole host of other technologies and methodologies) – and if those changes made further unexpected changes, and we can’t easily reverse them, then we may create an infinite regress of changes, wherein changes made to reverse a given modification in turn creates more changes, that in turn need to be reversed, ad infinitum.

Thus self-modification (and especially recursive self-modification), towards the purpose of intelligence amplification into Ultraintelligence [7], is easier (i.e. it necessitates a smaller technological and methodological infrastructure – that is, the required host of methods and technologies needed by something – and thus less cost as well) in virtual embodiment than in physical embodiment.

These recursive modifications not only further maximize the upload’s ability to think of ways to accelerate the coming of an intelligence explosion, but also maximize his ability to further self-modify towards that very objective (thus creating the positive feedback loop critical for I.J Good’s intelligence explosion hypothesis) – or in other words maximize his ability to maximize his general ability in anything.

But to what extent is the ability to self-modify hampered by the critical feature of Blind Replication mentioned above – namely, the inability to modify and optimize various performance measures by virtue of the fact that we can’t predictively model the operational dynamics of the system-in-question? Well, an upload could copy himself, enact any modifications, and see the results – or indeed, make a copy to perform this change-and-check procedure. If the inability to predictively model a system made through the “Blind Replication” method does indeed problematize the upload’s ability to self-modify, it would still be much easier to work towards being able to predictively model it, via this iterative change-and-check method, due to both the subjective-perception-of-time speedup and the ability to make copies of himself.
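A toy sketch of this iterative change-and-check procedure in Python, framed as copy, modify, evaluate, and keep or discard; the "mind state" and scoring function here are placeholder abstractions rather than anything resembling an actual emulation:

```python
# Toy change-and-check loop: copy the current state, apply a random
# modification, evaluate it, and keep the copy only if it scores better.
# The "state" and the score function are stand-in abstractions.
import copy
import random

def score(state):
    """Stand-in performance metric: higher is better (assumed measurable)."""
    return -sum((x - 0.5) ** 2 for x in state["parameters"])

def random_modification(state):
    """Return a modified copy; the original is left untouched (reversibility)."""
    candidate = copy.deepcopy(state)
    i = random.randrange(len(candidate["parameters"]))
    candidate["parameters"][i] += random.gauss(0, 0.1)
    return candidate

state = {"parameters": [random.random() for _ in range(8)]}
for step in range(1000):
    candidate = random_modification(state)
    if score(candidate) > score(state):  # check the modified copy...
        state = candidate                # ...and keep it only if it improves
print(f"final score: {score(state):.4f}")
```

Because every trial runs on a copy, a failed modification is simply discarded rather than painstakingly reversed in place, which is the practical upshot of the reversibility point made above about virtual embodiment.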

It is worth noting that it might be possible to predictively model (and thus make reliable or stable changes to) the operation of neurons without being able to model how this scales up to the operational dynamics of higher-level neural regions. Thus modifying, increasing or optimizing existing functional modalities (e.g. increasing synaptic density, or increasing the range of usable neurotransmitters and thus the potential information density of a given signal or synaptic transmission) may be significantly easier than creating categorically new functional modalities.

Increasing the Imminence of an Intelligence Explosion:

So what ways could the upload use his/her new advantages and abilities to actually accelerate the coming of an intelligence explosion? He could apply his abilities to self-modification, or to the creation of a Seed-AI (or more technically a recursively self-modifying AI).

He could also accelerate its imminence vicariously by working to accelerate the foundational technologies and methodologies (in other words, the technological and methodological infrastructure of an intelligence explosion) that largely determine when it arrives. He could apply his new abilities and advantages to designing better computational paradigms, to developing new methodologies within existing paradigms (e.g. non-von-Neumann architectures still within the paradigm of electrical computation), or to differential technological development in “real-time” physicality towards such aims – e.g. finding innovative means of allocating assets and resources (i.e. capital) to R&D for new computational paradigms, or optimizing current ones.

Thus there are numerous methods of indirectly increasing the imminence of a coming intelligence explosion (or the likelihood of its occurring within a certain time-range, which is a less ambiguous measure) – and no doubt many new ones will be realized only once such an upload acquires such advantages and abilities.

Intimations of Implications:

So… Is this good news or bad news? Like much else in this increasingly future-dominated age, the consequences of this scenario remain morally ambiguous. It could be both good and bad news. But the answer to this question is independent of the premises – that is, two people can agree on the viability of the premises and reasoning of the scenario while drawing opposite conclusions about whether it is good or bad news.

People who subscribe to the “Friendly AI” camp of AI-related existential risk will be at once hopeful and dismayed. While the scenario might increase their ability to create their AGI (or, more technically, their Coherent Extrapolated Volition engine [8]), thus decreasing the chances of an “unfriendly” AI being created in the interim, they will also be dismayed that it may involve (though does not necessitate) a recursively self-modifying intelligence, in this case an upload, being created prior to their own AGI – which is the very problem they are trying to mitigate in the first place.

Those who, like me, see a distributed intelligence explosion (in which all intelligences are allowed to recursively self-modify at the same rate – thus preserving “power” equality, or at least mitigating “power” disparity, where power is defined as the capacity to effect change in the world or society – and in which any intelligence increasing its capability at a faster rate than all others is disallowed) as a better method of mitigating the existential risk entailed by an intelligence explosion will also be dismayed. This scenario would allow one single person essentially to hold the power to determine the fate of humanity, due to his massively increased “capability” or “power” – which is the very feature (capability disparity/inequality) that the “distributed intelligence explosion” camp of AI-related existential risk seeks to minimize.

On the other hand, those who see great potential in an intelligence explosion to help mitigate existing problems afflicting humanity – e.g. death, disease, societal instability, etc. – will be hopeful because the scenario could decrease the time it takes to implement an intelligence explosion.

I, for one, think it highly likely that the advantages proffered by accelerating the coming of an intelligence explosion fail to outweigh the disadvantages incurred by the increased existential risk it would entail. That is, I think that the increase in existential risk brought about by putting so much “power”, or “capability-to-effect-change”, in the (hands?) of one intelligence outweighs the decrease in existential risk brought about by the accelerated creation of an Existential-Risk-Mitigating A(G)I.

Conclusion:

Thus, the scenario presented above yields some interesting and counter-intuitive conclusions:

  1. How imminent an intelligence explosion is, or how likely it is to occur within a given time-frame, may be determined more by basic processing power than by computational price-performance, which is a measure of basic processing power per unit of cost. This is because, as soon as we have enough processing power to emulate a human nervous system (provided we have sufficient software to emulate the lower-level neural components giving rise to the higher-level human mind), the increase in the rate of thought and in the subjective perception of time made available to that emulation could very well allow it to design and implement an AGI before computational price-performance increases by a large enough factor to make the processing power necessary for that AGI’s implementation available at a widely-affordable cost. This conclusion is independent of any specific estimate of how long the successful computational emulation of a human nervous system will take to achieve. It relies solely on the premise that the successful computational emulation of the human mind can be achieved faster than the successful implementation of an AGI whose design is not based upon the cognitive architecture of the human nervous system, and I have outlined various reasons why we might expect this to be the case. This would be true even if uploading could only be achieved faster than AGI (given an equal amount of funding or “effort”) by a seemingly negligible amount of time, like one week, because of the massive increase in speed of thought and in the rate of subjective perception of time that would then be available to such an upload (see the rough arithmetic sketched after this list).
  2. The creation of an upload may be relatively independent of software performance/capability. This is not to say that we don’t need any software, because we do, but rather that we don’t need significant increases in software performance or improvements in methodological implementation (i.e. how we actually design a mind, rather than the substrate it is instantiated by), which we do need in order to implement an AGI, and which we would need for WBE were the system we seek to emulate not already in existence. The creation of an upload may in fact be largely determined by processing power or computational performance/capability alone, whereas AGI depends on increases in both computational performance and software performance, or on fundamental progress in methodological implementation.
    • If this second conclusion is true, it means that an upload may be possible quite soon, considering that we have already passed the basic estimates of processing requirements given by Kurzweil, Moravec and Storrs Hall, provided we can emulate the low-level neural regions of the brain with high predictive accuracy (and provided the claim that instantiating such low-level components will vicariously instantiate the emergent human mind, without our needing to really understand how such components functionally converge to do so, proves true), whereas AGI may still have to wait for fundamental improvements to methodological implementation or “software performance”.
    • Thus it may be easier to create an AGI by first creating an upload to accelerate that AGI’s development than it would be to work on the development of an AGI directly. Upload+AGI may actually be easier to implement than AGI alone is!
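
To make the closing point of conclusion 1 concrete, the rough arithmetic below (a minimal sketch in Python, again assuming Yudkowsky’s illustrative 250-subjective-years-per-hour figure) shows how even a one-week real-time lead converts into millennia of subjective working time:

    # If uploading arrives just one real-time week before a de-novo AGI would,
    # the upload's subjective working time during that week (at an assumed
    # 250 subjective years per real-time hour) dwarfs the nominal lead.
    SUBJECTIVE_YEARS_PER_REAL_HOUR = 250      # assumed illustrative ratio
    lead_in_real_hours = 7 * 24               # a one-week head start
    subjective_years = SUBJECTIVE_YEARS_PER_REAL_HOUR * lead_in_real_hours
    print(f"Subjective years available during a one-week lead: ~{subjective_years:,}")
    # ~42,000 subjective years in which to design and implement an AGI.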

References:

[1] Kurzweil, R. (2005). The Singularity is Near. Penguin Books.

[2] Moravec, H. (1997). When will computer hardware match the human brain? Journal of Evolution and Technology, [Online] 1(1). Available at: http://www.jetpress.org/volume1/moravec.htm [Accessed 01 March 2013].

[3] Hall, J. (2006). Runaway Artificial Intelligence? Available at: http://www.kurzweilai.net/runaway-artificial-intelligence [Accessed 01 March 2013].

[4] Ford, A. (2011). Yudkowsky vs Hanson on the Intelligence Explosion – Jane Street Debate 2011. [Online Video]. August 10, 2011. Available at: http://www.youtube.com/watch?v=m_R5Z4_khNw [Accessed 01 March 2013].

[5] Drexler, K.E. (1989). Molecular Manipulation and Molecular Computation. In NanoCon Northwest Regional Nanotechnology Conference, Seattle, Washington, February 14–17. NANOCON. 2. Available at: http://www.halcyon.com/nanojbl/NanoConProc/nanocon2.html [Accessed 01 March 2013].

[6] Sandberg, A. & Bostrom, N. (2008). Whole Brain Emulation: A Roadmap. Technical Report #2008-3. Available at: http://www.philosophy.ox.ac.uk/__data/assets/pdf_file/0019/3853/brain-emulation-roadmap-report.pdf [Accessed 01 March 2013].

[7] Good, I.J. (1965). Speculations Concerning the First Ultraintelligent Machine. Advances in Computers, 6.

[8] Yudkowsky, E. (2004). Coherent Extrapolated Volition. The Singularity Institute.