
No, it’s not forbidden to innovate, quite the opposite, but it’s always risky to do something different from what people are used to. Risk is the middle name of the bold, the builders of the future. Those who constantly face resistance from skeptics. Those who fail eight times and get up nine.

(Credit: Adobe Stock)

Fernando Pessoa’s “First you find it strange. Then you can’t get enough of it” contained intolerable levels of toxicity for Salazar’s Estado Novo in Portugal. When the level of difference increases, censorship follows. You can’t censor censorship (or can you?) when, deep down, it’s a matter of fear of difference. Yes, it’s fear! Fear of accepting and facing the unknown. Fear of change.

What do I mean by this? Well, I may seem weird or strange in the ideas I hold and the actions I take in life, but within my weirdness there is a kind of “Eye of Agamotto” (sometimes a curse for me)… What I see is authentic and vivid. Sooner or later, the future I glimpse passes into this reality.

Once difference enters, it becomes normal and accepted by society, making room for more innovation, change, and difference.

Cyberspace 2021.

The term “cyberspace” first appeared in fiction in the 1980s, incorporating the Internet invented earlier (1969). It’s as if time doesn’t matter and cyberspace has always existed. It might not have had a name yet, but it surely existed, like certain universal laws that we are still discovering and coining but that have always existed.

It is the ether of digital existence…!

In 1995, I was also called crazy — albeit nicely, by the way — when, from door to door, I announced the presence of something called the Internet. Entrepreneurs esteemed me enough to welcome me warmly into their companies, perhaps because of my passion for explaining what was unknown to them, only to later decline what I proposed: placing their companies on the network of networks.

I was affectionately dubbed crazy for a few more years, until the point where I “stopped being crazy” and became just another entrepreneur exploring something still strange called the Internet. We were about to reach the so-called “dot-com bubble.” The competition had arrived, and I clapped my hands; I no longer felt alone!

(Obviously, I wasn’t the only one to see the future forming in front of our eyes. I saw color on black and white screens.)

The heights of wisdom, the masters of the universe, began to emerge because they had heard that the Internet was a business that made much money, and the gold rush became frantic and ridiculous. A few years later — for some, it didn’t even take years — the bubble burst like a mushroom cloud.

After persuasion resulting from the obvious, and not from the explanations of insane people (me included), this new industry matured and revolutionized the world. However, history tends to repeat itself, and several revolutions, large and small, have taken place since then. Some are so natural that change happens overtly and goes viral. But more attention needs to be paid to revolutionary changes that could jeopardize human existence as we know it.

I’m referring to Artificial Intelligence (AI), which is now everywhere, albeit invisible and tenuous. The exponential acceleration of technology is taking us to the point of no return.

When Moore’s Law itself becomes outdated, it only means that technological acceleration has gone into “warp” speed. At the risk of us human beings becoming outdated, we must change our reluctance and skepticism.

There is no time for skepticism. Adaptation to what is coming, or what is already here among us, like extraterrestrials, is crucial for the evolution and survival of the human species. I believe we are at another great peak of technological development.

I always pursued the future, not to live outside the reality of the present but to help build it. After all these years of dealing with the “Eye of Agamotto,” I feel the duty and obligation to contribute to a better future and not sit idly by watching what I fear will happen.

Angels and demons lurk between the zeros and ones!

So far, with current conventional computers, including supercomputers, the acceleration is already vertiginous. With quantum computers, the thing becomes much more serious, and if we aren’t up to merging our true knowledge, our human essence, with machines, danger lurks.

Quantum computing powers AI, maximizing it. An exponentiated AI quickly arrives at AGI: Artificial General Intelligence, which matches average human intelligence, and beyond it Superintelligence, which surpasses it. That is the intelligence of a machine that can successfully perform any intellectual task a human being can.

When our “own” intelligence is no longer the only one and Superintelligence has emerged, it will be good if the bond between human and machine has already had a real “handshake,” so that the two understand each other just as two modems did in the BBS (Bulletin Board System) era.

We human beings are still — and I believe we always will be — the central computer, albeit with inferior computational resources (for now), aided rather than replaced by mighty machines that accelerate our evolution.

There is no way out. It’s inevitable. It’s evolution. So, a challenge and not a problem. Perhaps the greatest human challenge. So far, it’s been warming up. Henceforth, everything done will have to be free of human toxicity so that New AI is, in fact, our best version, the cream of the very best in human beings; its essence in the form of a whole!

A digital transformation is a transition to a different world. Our power of adaptation to this different world defines our existence (survival, in the Darwinian sense).

As you’ve already noticed, the title of this article (Innovation is a risk!) has a double meaning. Let me complement it with:

Life is a risk!

If you are a Lifeboat subscriber or have been reading these pages for a while, you may know why it’s called “Lifeboat”. A fundamental goal of our founder, board, writers, and supporters is to sustain the environment and life in all its diversity, and, if necessary (i.e., if we destroy our environment beyond repair or face a massive incoming asteroid), to prepare for relocating. That is, to build a lifeboat, figuratively and literally.

But most of us never believed that we would face an existential crisis, except perhaps the potential for a third World War. Yet here we are: burning the forests, killing off unspeakable numbers of species (200 each day), cooking the planet, melting the ice caps, shooting a hole in the ozone, and losing more land to the sea each year.

Regarding the urgent message of Greta Thunberg, below, I am at a loss for words. Seriously, there is not much I can add to the first video below.

Information about climate change is all around us. Everyone knows about it; most people understand that it is real and that it poses an existential threat, quite possibly in our lifetimes. In our children’s lives, it will certainly lead to war, famine, cancer, and massive loss of land, structures, and money. It is already raising sea levels and killing off entire species at thousands of times the natural rate.

Yet few people, organizations, or governments treat the issue with the urgency of an existential crisis. Sure, a treaty was signed, and this week Jeff Bezos committed to reducing the carbon footprint of the world’s biggest retailer. But have we moved in the right direction since the Paris Accords were signed four years ago? On the contrary: we have accelerated the pace of self-destruction.

I want to speak out—and, of course, this blog post is my way of doing it. But I am at a loss for words, because everything I want to say is so deftly articulated by 15-year-old Greta Thunberg. I cannot possibly add to or improve upon her message.

Greta is not your typical hero. She is a child, has Asperger’s, and is a high school sophomore, yet she is a truant: she regularly skips class because she feels that doing her own thing is more important than education. She is absolutely right…

This week Greta educated the UN, US Congress and former President Obama (because the current president cannot grasp her message). She also led a protest campaign that attracted millions of Millennials in more than 100 cities across Asia, Europe, Australia, and the Americas.

Greta Thunberg is racing to save the world—and all of humanity while she is at it.

Rather than link to her talk before Congress or the UN, or this overly-slick PSA, I choose three videos (the last one is only 49 seconds). Don’t have the time to pause for a video, not even at bedtime? Please reconsider. This one is really, really important. Even more important than not texting and driving. If you ever felt that there was something to communicate to your circle and pass on to your family, this is it. Your children are counting on you.

In the first two videos, Greta makes interesting points. If you have ever imagined hearing an alarm bell, your ears should be clanging with these statements…

▪ 1st video, below

Greta was puzzled by an apparent incongruity when she was 8 years old: How is it that a widely reported existential threat has not resulted in a Stop-The-Presses, all out campaign to eliminate the threat? How is it that a majority of people claim to support the cause, applaud at speeches, support the Paris Accords—and yet the burning of fossil fuels has increased and the destruction of jungles & rain forests is accelerating? The carbon budget of the Paris Accords has already been ⅔ consumed! Even worse, scientists now believe that the budget was too relaxed. Even back then (3 years ago) things were worse than we had believed.

▪ 2nd video, below

Although Greta states it without emotion (a symptom of Asperger’s), she was surprised to find that America has climate change ‘believers’ and ‘non-believers’. Without a hint of sarcasm, she explains that in Sweden, everyone understands the facts.

Please view these videos. Is there anything in your day that is more important? I doubt it. Saving the planet is no longer a slogan. It’s our only chance at survival—and that chance is getting slimmer with each day.

1. Ted Talk (11 min), Stockholm Aug 2018

2. Trevor Noah TV episode (9 min), Sep 14, 2019

3. Meeting President Obama (49 sec), Sep 18, 2019


Philip Raymond co-chairs CRYPSA, hosts the Bitcoin Event and is keynote speaker at Cryptocurrency Conferences. He is a top writer at Quora.

Artificial Intelligence (AI) is an emerging field of computer programming that is already changing the way we interact online and in real life, but the term ‘intelligence’ has been poorly defined. Rather than focusing on smarts, researchers should be looking at the implications and viability of artificial consciousness as that’s the real driver behind intelligent decisions.

Consciousness rather than intelligence should be the true measure of AI. At the moment, despite all our efforts, there’s none.

Significant advances have been made in the field of AI over the past decade, in particular with machine learning, but artificial intelligence itself remains elusive. Instead, what we have is artificial serfs—computers with the ability to trawl through billions of interactions and arrive at conclusions, exposing trends and providing recommendations, but they’re blind to any real intelligence. What’s needed is artificial awareness.

Elon Musk has called AI the “biggest existential threat” facing humanity and likened it to “summoning a demon,”[1] while Stephen Hawking thought it would be the “worst event” in the history of civilization and could “end with humans being replaced.”[2] Although this sounds alarmist, like something from a science fiction movie, both concerns are founded on a well-established scientific premise found in biology—the principle of competitive exclusion.[3]

Competitive exclusion describes a natural phenomenon first outlined by Charles Darwin in On the Origin of Species. In short, when two species compete for the same resources, one will invariably win over the other, driving it to extinction. Forget about meteorites killing the dinosaurs or super volcanoes wiping out life, this principle describes how the vast majority of species have gone extinct over the past 3.8 billion years![4] Put simply, someone better came along—and that’s what Elon Musk and Stephen Hawking are concerned about.
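The dynamics behind competitive exclusion can be sketched with the classic Lotka-Volterra competition model. This is a minimal illustration with purely hypothetical parameters, not a claim about any real species: two populations occupy exactly the same niche (competition coefficients of 1), and the one with the slightly higher carrying capacity drives the other to extinction, with no catastrophe required.

```python
def simulate(n1, n2, k1=100.0, k2=110.0, r=0.5, dt=0.01, steps=20000):
    """Euler integration of two complete competitors sharing one niche.

    k1, k2 are carrying capacities (species 2 is marginally more
    efficient); r is the growth rate; parameters are illustrative only.
    """
    for _ in range(steps):
        # Both species draw on the same resource pool, so each one's
        # growth is limited by the combined population n1 + n2.
        dn1 = r * n1 * (1 - (n1 + n2) / k1)
        dn2 = r * n2 * (1 - (n1 + n2) / k2)
        n1 += dn1 * dt
        n2 += dn2 * dt
    return n1, n2

n1, n2 = simulate(10.0, 10.0)
print(f"species 1: {n1:.3f}, species 2: {n2:.3f}")
# Despite identical starting numbers, species 2 settles near its
# carrying capacity while species 1 collapses toward zero.
```

The mechanism mirrors the text: neither species is attacked, yet once the shared resource is saturated, the marginally less efficient competitor can only decline. "Someone better came along."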

When it comes to Artificial Intelligence, there’s no doubt computers have the potential to outpace humanity. Already, their ability to remember vast amounts of information with absolute fidelity eclipses our own. Computers regularly beat grand masters at competitive strategy games such as chess, but can they really think? The answer is, no, and this is a significant problem for AI researchers. The inability to think and reason properly leaves AI susceptible to manipulation. What we have today is dumb AI.

Rather than fearing some all-knowing malignant AI overlord, the threat we face comes from dumb AI as it’s already been used to manipulate elections, swaying public opinion by targeting individuals to distort their decisions. Instead of ‘the rise of the machines,’ we’re seeing the rise of artificial serfs willing to do their master’s bidding without question.

Russian President Vladimir Putin understands this better than most, and said, “Whoever becomes the leader in this sphere will become the ruler of the world,”[5] while Elon Musk commented that competition between nations to create artificial intelligence could lead to World War III.[6]

The problem is we’ve developed artificial stupidity. Our best AI lacks actual intelligence. The most complex machine learning algorithm we’ve developed has no conscious awareness of what it’s doing.

For all of the wonderful advances made by Tesla, its in-car autopilot drove into the back of a bright red fire truck because it wasn’t programmed to recognize that specific object, and this highlights the problem with AI and machine learning—there’s no actual awareness of what’s being done or why.[7] What we need is artificial consciousness, not intelligence. A computer CPU with 18 cores, capable of processing 36 independent threads, running at 4 gigahertz, handling hundreds of millions of commands per second, doesn’t need more speed, it needs to understand the ramifications of what it’s doing.[8]

In the US, courts regularly use COMPAS, a complex computer algorithm using artificial intelligence to determine sentencing guidelines. Although it’s designed to reduce the judicial workload, COMPAS has been shown to be ineffective, being no more accurate than random, untrained people at predicting the likelihood of someone reoffending.[9] At one point, its predictions of violent recidivism were only 20% accurate.[10] And this highlights a perception bias with AI—complex technology is inherently trusted, and yet in this circumstance, tossing a coin would have been an improvement!

Dumb AI is a serious problem with serious consequences for humanity.

What’s the solution? Artificial consciousness.

It’s not enough for a computer system to be intelligent or even self-aware. Psychopaths are self-aware. Computers need to be aware of others, they need to understand cause and effect as it relates not just to humanity but life in general, if they are to make truly intelligent decisions.

All of human progress can be traced back to one simple trait—curiosity, the ability to ask, “Why?” This one simple concept has led us not only to an understanding of physics and chemistry, but to the development of ethics and morals. We’ve not only asked, “Why is the sky blue?” but also, “Why am I treated this way?” And the answers to those questions have shaped civilization.

COMPAS needs to ask why it arrives at a certain conclusion about an individual. Rather than simply crunching probabilities that may or may not be accurate, it needs to understand the implications of freeing an individual weighed against the adversity of incarceration. Spitting out a number is not good enough.

In the same way, Tesla’s autopilot needs to understand the implications of driving into a stationary fire truck at 65MPH—for the occupants of the vehicle, the fire crew, and the emergency they’re attending. These are concepts we intuitively grasp as we encounter such a situation. Having a computer manage the physics of an equation is not enough without understanding the moral component as well.

The advent of true artificial intelligence, one that has artificial consciousness, need not be the end-game for humanity. Just as humanity developed civilization and enlightenment, so too will AI become our partner in life, if it is built to be aware of morals and ethics.

Artificial intelligence needs culture as much as logic, ethics as much as equations, morals and not just machine learning. How ironic that the real danger of AI comes down to how much conscious awareness we’re prepared to give it. As long as AI remains our slave, we’re in danger.

tl;dr — Computers should value more than ones and zeroes.

About the author

Peter Cawdron is a senior web application developer for JDS Australia working with machine learning algorithms. He is the author of several science fiction novels, including RETROGRADE and REENTRY, which examine the emergence of artificial intelligence.

[1] Elon Musk at MIT Aeronautics and Astronautics department’s Centennial Symposium

[2] Stephen Hawking on Artificial Intelligence

[3] The principle of competitive exclusion is also called Gause’s Law, although it was first described by Charles Darwin.

[4] Peer-reviewed research paper on the natural causes of extinction

[5] Vladimir Putin a televised address to the Russian people

[6] Elon Musk tweeting that competition to develop AI could lead to war

[7] Tesla car crashes into a stationary fire engine

[8] Fastest CPUs

[9] Recidivism predictions no better than random strangers

[10] Violent recidivism predictions only 20% accurate



“The Future: A Very Short Introduction” (OUP, 2017) by Dr. Jennifer M Gidley.


Oxford University Press has just released a wonderful little animation video centring on my book “The Future: A Very Short Introduction” published in 2017. In an entertaining way it shows how the concept of the future or futures is central to so many other concepts — many of which are the subject of other OUP Very Short Introductions. The VSI Series now has well over 500 titles, with ‘The Future’ being number 516.

To watch the video click here.

You can read a full sample chapter of the Introduction, and abstracts of all the other chapters can be read at the links below.

Contents

List of Illustrations

Introduction

1 Three Thousand Years of Futures

2 The Future Multiplied

3 The Evolving Scholarship of Futures Studies

4 Crystal Balls, Flying Cars and Robots

5 Technotopian or Human-Centred Futures?

6 Grand Global Futures Challenges

Conclusion

References

Further Reading & Websites

Appendix: Global Futures Timeline

Index

The book is available to purchase at OUP.

‘The Future’ has been very well received globally, and an Arabic translation has recently been released by the Bahrain Authority for Culture and Antiquities.

The Arabic translation of ‘The Future’ will be available at all book fairs in the Arab region, and the distributor covers the important libraries in all Arab countries, as well as Saqi Books (UK) and the Jarir Bookstore (USA). It can also be purchased through the following:

www.neelwafurat.com

www.jamalon.com

www.alfurat.com

A Chinese translation has been licensed and is underway, and discussions are in process for translations into German, Turkish, Italian and French.


[This article is drawn from Ch. 8: “Pedagogical Love: An Evolutionary Force” in Postformal Education: A Philosophy for Complex Futures.]

“There is nothing more important in this world than radical love,” as Paulo Freire told Joe Kincheloe over dinner.

- Joe Kincheloe. Reading, Writing and Cognition. 2006.

And yet, we live in a world of high-stakes testing, league tables for primary schools as well as universities, funding cuts, teacher shortages, mass shootings in schools, and rising rates of depression and suicide among young people.

The most important value missing from education today is pedagogical love.

In “Pedagogical Love: An Evolutionary Force” (Ch. 8 of Postformal Education: A Philosophy for Complex Futures) I explain why love should be at centre-stage in education. I introduce contemporary educational approaches that support a caring pedagogy, and some experiences and examples from my own and others’ practice, ending with some personal reflections on the theme.

Why do we want to educate with and for love? We live in a cynical global world with a dominant culture that does not value care and empathy. We live under the blanket of a dominant worldview that promotes values that are clearly damaging to human and environmental wellbeing. In many ways our world, with its dominance of economic values over practically all other concerns, is a world of callous values. And recently we’ve embarked on a flight from truth.

In the search for truth, the only passion that must not be discarded is love. Truth [must] become the object of increasing love and care and devotion.

- Rudolf Steiner. Metamorphoses of the Soul, Vol. I. 1909.

What a contrast Steiner’s early 20th century statement is to the lack of a love for truth that abounds in fake news in our post-Truth world. Canadian holistic educator, John Miller points to the subjugation of words like love in contemporary educational literature in the following quote:

The word ‘love’ is rarely mentioned in educational circles. The word seems out of place in a world of outcomes, accountability, and standardised tests.

- John Miller. Education and the Soul. 2000.

British educational researcher, Maggie MacLure speaks about the obsession with quantitative language in education in the UK: “objectives, outcomes, standards, high-stakes testing, competition, performance and accountability.” She argues that the resistance to the complexity and diversity of qualitative research that is found in the evidence-based agendas of the audit culture is linked to “deep-seated fears and anxieties about language and desire to control it.” In this context it is not hard to imagine that words like love might create what MacLure calls ontological panic among the educational audit-police.

In spite of these challenges, several educational theorists and practitioners emphasise the importance of love—and the role of the heart—in educational settings. If young people are to thrive in educational settings, new spaces need to be opened up for softer terms, such as love, nurture, respect, reverence, awe, wonder, wellbeing, vulnerability, care, tenderness, openness, and trust.

Awe, wonder, reverence, and epiphany are drawn forth not by a quest for control, domination, or certainty, but by an appreciative and open-ended engagement with the questions.

- Tobin Hart. Teaching for Wisdom. 2001.

Arthur Zajonc has developed an educational and contemplative process that he calls an “epistemology of love.” Mexican holistic education philosopher Ramon Gallegos Nava refers to holistic education as a “pedagogy of universal love.” Other important contributions to bringing pedagogical love into education include Nel Noddings’ extensive writings on an “ethics of care”, Parker Palmer’s “heart of a teacher”, and Tobin Hart’s “deep empathy.”

The caring teacher strives first to establish and maintain caring relations, and these relations exhibit an integrity that provides a foundation for everything teacher and student do together.

- Nel Noddings. Caring in Education. 2005.

[*This article was first published in the September 2017 issue of Paradigm Explorer: The Journal of the Scientific and Medical Network (Established 1973). The article was drawn from the author’s original work in her book: The Future: A Very Short Introduction (Oxford University Press, 2017), especially from Chapters 4 & 5.]

We are at a critical point today in research into human futures. Two divergent streams show up in the human futures conversations. Which direction we choose will also decide the fate of earth futures in the sense of Earth’s dual role as home for humans, and habitat for life. I choose to deliberately oversimplify here to make a vital point.

The two approaches I discuss here are informed by Oliver Markley and Willis Harman’s two contrasting future images of human development: ‘evolutionary transformational’ and ‘technological extrapolationist’ in Changing Images of Man (Markley & Harman, 1982). This has historical precedents in two types of utopian human futures distinguished by Fred Polak in The Image of the Future (Polak, 1973) and C. P. Snow’s ‘Two Cultures’ (the humanities and the sciences) (Snow, 1959).

What I call ‘human-centred futures’ is humanitarian, philosophical, and ecological. It is based on a view of humans as kind, fair, consciously evolving, peaceful agents of change with a responsibility to maintain the ecological balance between humans, Earth, and cosmos. This is an active path of conscious evolution involving ongoing psychological, socio-cultural, aesthetic, and spiritual development, and a commitment to the betterment of earthly conditions for all humanity through education, cultural diversity, greater economic and resource parity, and respect for future generations.

By contrast, what I call ‘technotopian futures’ is dehumanising, scientistic, and atomistic. It is based on a mechanistic, behaviourist model of the human being, with a thin cybernetic view of intelligence. The transhumanist ambition to create future techno-humans is anti-human and anti-evolutionary. It involves technological, biological, and genetic enhancement of humans and artificial machine ‘intelligence’. Some technotopians have transcendental dreams of abandoning Earth to build a fantasised techno-heaven on Mars or in satellite cities in outer space.

Interestingly, this contest for the control of human futures has been waged intermittently since at least the European Enlightenment. Over a fifty-year time span in the second half of the 18th century, a power struggle for human futures emerged, between human-centred values and the dehumanisation of the Industrial Revolution.

The German philosophical stream included the idealists and romantics, such as Herder, Novalis, Goethe, Hegel, and Schelling. They took their lineage from Leibniz and his 17th-century integral, spiritually-based evolutionary work. These German philosophers, along with romantic poets such as Blake, Wordsworth and Coleridge (who helped introduce German idealism to Britain) seeded a spiritual-evolutionary humanism that underpins the human-centred futures approach (Gidley, 2007).

The French philosophical influence included La Mettrie’s mechanistic man and René Descartes’s early 17th-century split between mind and body, forming the basis of French (or Cartesian) Rationalism. These French philosophers, La Mettrie and Descartes, along with the theorists of progress such as Turgot and de Condorcet, were secular humanists. Secular humanism is one lineage of technotopian futures. Scientific positivism is another (Gidley, 2017).

Transhumanism, Posthumanism and the Superman Trope

Transhumanism in the popular sense today is inextricably linked with technological enhancement or extensions of human capacities through technology. This is a technological appropriation of the original idea of transhumanism, which began as a philosophical concept grounded in the evolutionary humanism of Teilhard de Chardin, Julian Huxley, and others in the mid-20th century, as we shall see below.

In 2005, the Oxford Martin School at the University of Oxford founded the Future of Humanity Institute and appointed Swedish philosopher Nick Bostrom as its director. Bostrom makes a further distinction between secular humanism, concerned with human progress and improvement through education and cultural refinement, and transhumanism, involving ‘direct application of medicine and technology to overcome some of our basic biological limits.’

Bostrom’s transhumanism would enhance human performance through existing technologies, such as genetic engineering and information technologies, as well as emerging technologies, such as molecular nanotechnology and machine intelligence. It does not entail technological optimism, in that he regularly points to the risks of potential harm, including the ‘extreme possibility of intelligent life becoming extinct’ (Bostrom, 2014). In support of Bostrom’s concerns, renowned theoretical physicist Stephen Hawking and billionaire entrepreneur and engineer Elon Musk have issued serious warnings about the potential existential threats to humanity that advances in ‘artificial super-intelligence’ (ASI) may release.

Not all transhumanists are in agreement, nor do they all share Bostrom’s, Hawking’s, and Musk’s circumspect views. In his book The Hedonistic Imperative, David Pearce argues for a biological programme involving genetic engineering and nanotechnology that will ‘eliminate all forms of cruelty, suffering, and malaise’ (Pearce, 1995/2015). Like the shadow side of the ‘progress narrative’ that has been used as an ideology to support racism and ethnic genocide, this sounds frighteningly like a reinvention of Comte and Spencer’s 19th-century Social Darwinism. Along similar lines, Byron Reese claims in his book Infinite Progress that the Internet and technology will end ‘Ignorance, Disease, Poverty, Hunger and War’ and that we will colonise outer space, with a billion other planets each populated by a billion people (Reese, 2013). What happens in the meantime to Earth seems of little concern to them.

One of the most extreme forms of transhumanism is posthumanism: a concept connected with the high-tech movement to create so-called machine super-intelligence. Because posthumanism requires technological intervention, posthumans are essentially a new, or hybrid, species, including the cyborg and the android. The movie character Terminator is a cyborg.

The most vocal of the high-tech transhumanists have ambitions that seem to have grown out of the superman trope so dominant in early to mid-20th-century North America. Their version of transhumanism includes the idea that human functioning can be technologically enhanced exponentially, until the eventual convergence of human and machine into the singularity (another term for posthumanism). To popularise this concept, Google engineer Ray Kurzweil co-founded Singularity University in Silicon Valley in 2009. While the espoused mission of Singularity University is to use accelerating technologies to address ‘humanity’s hardest problems’, Kurzweil’s own vision is pure science fiction. In another twist, there is a striking resemblance between the Singularity University logo and the Superman logo.

When unleashing accelerating technologies, we need to ask ourselves, how should we distinguish between authentic projects to aid humanity, and highly resourced messianic hubris? A key insight is that propositions put forward by techno-transhumanists are based on an ideology of technological determinism. This means that the development of society and its cultural values are driven by that society’s technology, not by humanity itself.

In an interesting counter-intuitive development, Bostrom points out that since the 1950s there have been periods of hype and high expectations about the prospect of AI (1950s, 1970s, 1980s, 1990s) each followed by a period of setback and disappointment that he calls an ‘AI winter’. The surge of hype and enthusiasm about the coming singularity surrounding Kurzweil’s naïve and simplistic beliefs about replicating human consciousness may be about to experience a fifth AI winter.

The Dehumanization Critique

The strongest critiques of the overextension of technology involve claims of dehumanisation, and these arguments are not new. Canadian philosopher of the electronic age Marshall McLuhan cautioned decades ago against too much human extension into technology. McLuhan famously claimed that every media extension of man is an amputation. Once we have a car, we don’t walk to the shops anymore; once we have a computer hard drive, we don’t have to remember things; and with personal GPS on our cell phones, many of us can no longer find our way without it. In these instances, we are already surrendering human faculties that we have developed over millennia. It is likely that further extending human faculties through techno- and bio-enhancement will lead to arrested development in the natural evolution of higher human faculties.

From the perspective of the psychology of intelligence, the term artificial intelligence is an oxymoron. Intelligence, by nature, cannot be artificial, and its inestimable complexity defies any notion of artificiality. We need the courage to name the notion of ‘machine intelligence’ for what it really is: anthropomorphism. Until AI researchers can define what they mean by intelligence, and explain how it relates to consciousness, the term artificial intelligence must remain a word without universal meaning. At best, so-called artificial intelligence can mean little more than machine capability, which will always be limited by the design and programming of its inventors. As for machine super-intelligence, it is difficult not to read this as Silicon Valley hubris.

Furthermore, much of the transhumanist discourse of the 21st century reflects a historical and sociological naïveté. Other than Bostrom, transhumanist writers seem oblivious to the 3,000-year history of humanity’s attempts to predict, control, and understand the future (Gidley, 2017). Although many transhumanists sit squarely within a cornucopian narrative, they seem unaware of the alternating historical waves of techno-utopianism (or Cornucopianism) and techno-dystopianism (or Malthusianism). This is especially evident in their appropriation and hijacking of the term ‘transhumanism’ with little apparent knowledge or regard for its origins.

Origins of a Humanistic Transhumanism

In 1950, Pierre Teilhard de Chardin (1881–1955) published the essay From the Pre-Human to the Ultra-Human: The Phases of a Living Planet, in which he speaks of ‘some sort of Trans-Human at the ultimate heart of things’. Teilhard de Chardin’s Ultra-Human and Trans-Human were evolutionary concepts linked with spiritual/human futures. These concepts inspired his friend Sir Julian Huxley to write about transhumanism, which he did in 1957 as follows [Huxley’s italics]:

The human species can, if it wishes, transcend itself—not just sporadically, an individual here in one way, an individual there in another way—but in its entirety, as humanity. We need a name for this new belief. Perhaps transhumanism will serve: man remaining man, but transcending himself, by realising new possibilities of and for his human nature (Huxley, 1957).

Ironically, this quote is used by techno-transhumanists to attribute to Huxley the coining of the term transhumanism. And yet, their use of the term is in direct contradiction to Huxley’s use. Huxley, a biologist and humanitarian, was the first Director-General of UNESCO in 1946, and the first President of the British Humanist Association. His transhumanism was more humanistic and spiritual than technological, inspired by Teilhard de Chardin’s spiritually evolved human. These two collaborators promoted the idea of conscious evolution, which originated with the German romantic philosopher Schelling.

The evolutionary ideas under discussion in the century before Darwin focused on consciousness and theories of human progress as a cultural, aesthetic, and spiritual ideal. Late 18th-century German philosophers foreshadowed the 20th-century human potential and positive psychology movements. To support their evolutionary ideals for society they created a universal education system, the aim of which was to develop the whole person (Bildung in German) (Gidley, 2016).

After Darwin, two notable European philosophers began to explore the impact of Darwinian evolution on human futures, in other ways than Spencer’s social Darwinism. Friedrich Nietzsche’s ideas about the higher person (Übermensch) were informed by Darwin’s biological evolution, the German idealist writings on evolution of consciousness, and were deeply connected to his ideas on freedom.

French philosopher Henri Bergson’s contribution to the superhuman discourse first appeared in Creative Evolution (Bergson, 1907/1944). Like Nietzsche, Bergson saw the superman arising out of the human being, in much the same way that humans have arisen from animals. In parallel with the efforts of Nietzsche and Bergson, Rudolf Steiner articulated his own ideas on evolving human-centred futures, with concepts such as spirit self and spirit man (between 1904 and 1925) (Steiner, 1926/1966). During the same period, Indian political activist Sri Aurobindo wrote about the Overman, a type of consciously evolving future human being (Aurobindo, 1914/2000). Both Steiner and Sri Aurobindo founded education systems after the German Bildung style of holistic human development.

Consciously Evolving Human-Centred Futures

There are three major bodies of research offering counterpoints to the techno-transhumanist claim that superhuman powers can only be reached through technological, biological, or genetic enhancement. Extensive research shows that humans have far greater capacities across many domains than we realise. In brief, these themes are the future of the body, cultural evolution and futures of thinking.

Michael Murphy’s book The Future of the Body documents ‘superhuman powers’ unrelated to technological or biological enhancement (Murphy, 1992). For forty years Murphy, founder of the Esalen Institute, has been researching what he calls a Natural History of Supernormal Attributes. He has developed an archive of 10,000 studies of individual humans, throughout history, who have demonstrated supernormal experiences across twelve groups of attributes. In almost 800 pages Murphy documents the supernormal capacities of Catholic mystics, Sufi ecstatics, Hindu-Buddhist siddhis, martial arts practitioners, and elite athletes. Murphy concludes that these extreme examples are the ‘developing limbs and organs of our evolving human nature’. We also know, from the examples of savants, extreme sport and adventure, and the narratives of mystics and saints in the vast literature of the perennial philosophies, that we humans have always extended ourselves—often using little more than the power of our minds.

Regarding cultural evolution, numerous 20th century scholars and writers have put forward ideas about human cultural futures. Ervin László links evolution of consciousness with global planetary shifts (László, 2006). Richard Tarnas in The Passion of the Western Mind traces socio-cultural developments over the last 2,000 years, pointing to emergent changes (Tarnas, 1991). Jürgen Habermas suggests a similar developmental pattern in his book Communication and the Evolution of Society (Habermas, 1979). In the late 1990s Duane Elgin and Coleen LeDrew undertook a forty-three-nation World Values Survey, including Scandinavia, Switzerland, Britain, Canada, and the United States. They concluded, ‘a new global culture and consciousness have taken root and are beginning to grow in the world’. They called it the postmodern shift and described it as having two qualities: an ecological perspective and a self-reflexive ability (Elgin & LeDrew, 1997).

In relation to futures of thinking, adult developmental psychologists have built on positive psychology, and the human potential movement beginning with Abraham Maslow’s book Further Reaches of Human Nature (Maslow, 1971). In combination with transpersonal psychology the research is rich with extended views of human futures in cognitive, emotional, and spiritual domains. For four decades, adult developmental psychology researchers such as Michael Commons, Jan Sinnott, and Lawrence Kohlberg have been researching the systematic, pluralistic, complex, and integrated thinking of mature adults (Commons & Ross, 2008; Kohlberg, 1990; Sinnott, 1998). They call this mature thought ‘postformal reasoning’ and their research provides valuable insights into higher modes of reasoning that are central to the discourse on futures of thinking. Features they identify include complex paradoxical thinking, creativity and imagination, relativism and pluralism, self-reflection and ability to dialogue, and intuition. Ken Wilber’s integral psychology research complements his cultural history research to build a significantly enhanced image of the potential for consciously evolving human futures (Wilber, 2000).

I apply these findings to education in my book Postformal Education: A Philosophy for Complex Futures (Gidley, 2016).

Can AI ever cross the Consciousness Threshold?

Given the breadth and subtlety of postformal reasoning, how likely is it that machines could ever acquire such higher functioning human features? The technotopians discussing artificial superhuman intelligence carefully avoid the consciousness question. Bostrom explains that all the machine intelligence systems currently in use operate in a very narrow range of human cognitive capacity (weak AI). Even at its most ambitious, it is limited to trying to replicate ‘abstract reasoning and general problem-solving skills’ (strong AI). In spite of all the hype around AI and ASI, the Machine Intelligence Research Institute (MIRI)’s own website states that even ‘human-equivalent general intelligence is still largely relegated to the science fiction shelf.’ Regardless of who writes about posthumanism, and whether they are Oxford philosophers, MIT scientists, or Google engineers, they do not yet appear to be aware that there are higher forms of human reasoning than their own. Nor do they have the scientific and technological means to deliver on their high-budget fantasies. Machine super-intelligence is not only an oxymoron, but a science fiction concept.

Even if techno-developers were to succeed in replicating general intelligence (strong AI), it would only function at the level of Piaget’s formal operations. Yet adult developmental psychologists have shown that mature, high-functioning adults are capable of very complex, imaginative, integrative, paradoxical, spiritual, intuitive wisdom—just to name a few of the qualities we humans can consciously evolve. These complex postformal logics go far beyond the binary logic used in coding and programming machines, and it seems also far beyond the conceptual parameters of the AI programmers themselves. I find no evidence in the literature that anyone working with AI is aware of either the limits of formal reasoning or the vast potential of higher stages of postformal reasoning. In short, ASI proponents are entrapped in their thin cybernetic view of intelligence. As such they are oblivious to the research on evolution of consciousness, metaphysics of mind, multiple intelligences, philosophy and psychology of consciousness, transpersonal psychology and wisdom studies, all providing ample evidence that human intelligence is highly complex and evolving.

When all of this research is taken together it indicates that we humans are already capable of far greater powers of mind, emotion, body, and spirit than previously imagined. If we seriously want to develop superhuman intelligence and powers in the 21st century and beyond we have a choice. We can continue to invest heavily in naïve technotopian dreams of creating machines that can operate better than humans. Or we can invest more of our consciousness, energy, and resources on educating and consciously evolving human futures with all the wisdom that would entail.

About Professor Jennifer M. Gidley PhD

Author, psychologist, educator and futurist, Jennifer is a global thought leader and advocate for human-centred futures in an era of hi-tech hype and hubris. She is Adjunct Professor at the Institute for Sustainable Futures, UTS, Sydney and author of The Future: A Very Short Introduction (Oxford, 2017) and Postformal Education: A Philosophy for Complex Futures (Springer, 2016). As former President of the World Futures Studies Federation (2009−2017), a UNESCO and UN ECOSOC partner and global peak body for futures studies, Jennifer led a network of hundreds of the world’s leading futures scholars and researchers from over 60 countries for eight years.

References

[To check references please go to original article in Paradigm Explorer, p. 15–18]

The future of cancer care should mean more cost-effective treatments, a greater focus on prevention, and a new mindset: A Surgical Oncologist’s take

Multidisciplinary team management of many types of cancer has led to significant improvements in median and overall survival. Unfortunately, there are other cancers on which we have had little impact. In patients with pancreatic adenocarcinoma and hepatocellular cancer, we have been able to improve median survival by only a few months, and at the cost of the toxicity associated with the treatments. From the point of view of a surgical oncologist, I believe there will be rapid advances over the next several decades.

Robotic Surgery

There is already one robotic surgery system on the market, and another will soon be available. Advances in robotics and imaging have allowed for improved three-dimensional spatial recognition of anatomy, and the range of movement of instruments will continue to improve. Real-time haptic feedback may become possible with enhanced neural network systems. It is already possible to perform some operations with greater facility, such as very low sphincter-sparing operations for rectal adenocarcinoma in patients who previously would have required a permanent colostomy. As surgeons’ ability and experience with new robotic equipment grow, the number and types of operations performed will increase, and patient recovery time, length of hospital stay, and return to full functional status will improve. Competition may also drive down the exorbitant cost of current equipment.

More Cost Effective Screening

The mapping of the human genome was a phenomenal project and achievement. However, we still do not understand the function of all of the genes identified, or their complex interactions with other molecules in the nucleus. We also forget that cancer is a perfect experiment in evolutionary biology. Once cancer has developed, we begin treatments with cytotoxic chemotherapy drugs, targeted agents, immunotherapies, and ionizing radiation. Many of these treatments are themselves mutagenic, and they place selection pressure on cells with beneficial mutations, allowing them to evade response or repair damage caused by the treatment, survive, multiply, and metastasize. In some patients who are seeming success stories, new cancers develop years or decades later, induced by the very therapies used to treat their initial cancer. Currently, we place far too little emphasis on screening and prevention of cancer. Hopefully, in the not-too-distant future, screening patients with simple, readily available, and inexpensive blood tests looking at circulating cells and free DNA may allow us to recognize patients at high risk of developing certain malignancies, or to detect cancer at far earlier stages, when surgical and other therapies have a higher probability of success.
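Why does it matter that screening target high-risk patients? Because the usefulness of any test depends on how common the disease is in the group being screened. The sketch below makes this concrete with Bayes' rule; the sensitivity, specificity, and prevalence figures are purely illustrative assumptions, not the characteristics of any real assay.

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' rule: probability of disease given a positive test result."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Illustrative (assumed) test: 90% sensitive, 95% specific.
general = positive_predictive_value(prevalence=0.005, sensitivity=0.90, specificity=0.95)
high_risk = positive_predictive_value(prevalence=0.05, sensitivity=0.90, specificity=0.95)

print(f"PPV, general population (0.5% prevalence): {general:.1%}")   # most positives are false alarms
print(f"PPV, high-risk group (5% prevalence):      {high_risk:.1%}")  # far more informative
```

With these assumed numbers, a positive result in the general population is usually a false alarm, while the same test applied to a high-risk group is several times more informative, which is why identifying high-risk patients first makes inexpensive blood-based screening far more cost-effective.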

Changing the Mindset

A diagnosis of cancer incites fear and uncertainty in patients and their family members. Many feel they are receiving a certain death sentence. While we have improved the probability of long-term success with some cancers, for others we have simply shifted the survival curve to produce a few more months of survival before the patient succumbs. We need to adopt strategies that allow us to contain and control malignant disease without necessarily eradicating it. If a tumor or tumors are in a dormant or senescent state and not causing symptoms or problems, then minimally toxic treatments that stop tumor growth and progression, allowing the patient to live a normal and productive life, would be a success. Patients with a diagnosis of diabetes are never “cured” of their diabetes, but with proper medical management their disease can be controlled, and they can survive and function without any of the negative consequences and sequelae of the disease. If we can understand genetic signaling and aberrations sufficiently, perhaps we can control cancer for long periods while maintaining a high quality of life for our patients.

Taking on Tough Political Issues

I am often asked by patients if I believe there will ever be a “cure” for cancer. I invariably reply that it is unlikely if we continue to engage in activities and behaviors that increase the likelihood of developing cancer. Cigarette smoking, smokeless tobacco use, excess alcohol or food intake, lack of exercise, and pollution of the environment around us produce carcinogens or conditions that increase the risk of cancer development. Unless we find the courage and strength to limit access to or ban substances that are known carcinogens, like cigarettes, and begin, as thoughtful citizens of the planet, to behave in a more responsible fashion to eliminate air, ground, and water pollution, we will not make a significant impact on the incidence of cancer. We must also be willing to develop greater and more far-reaching population education programs about things as simple as proper ultraviolet light protection during sun exposure, and to recognize that tanning beds and excessive, unprotected natural sunlight exposure increase the risk of melanoma, a particularly difficult and vicious malignancy. Whether we like to admit it or not, humans respond to societal pressures and images displayed or touted by media, marketing firms, or so-called beauty and glamor outlets that may actually be harmful to the health of the populace. People do and should have free will, but they should also be given understandable, honest, and rational information on the potential consequences of their choices. There should also be a higher level of personal accountability and responsibility for negative outcomes based on an individual’s choices.

Global Cancer Care

It is estimated that between half and two thirds of the world’s population, particularly in poor or developing countries, have limited or no access to cancer prevention, screening, or care. The improved outcomes we report in medical and surgical journals from advanced countries assume the treatment can be paid for and that access is available to all. Nothing could be further from the truth. Meaningful efforts must be made to rein in the rampant increases in cancer drug costs, reduce the prohibitively long and expensive process of developing and approving a novel treatment, and provide training and education for practitioners in developing countries. The disparities even within the United States are great, and it is well known and documented that disadvantaged populations are often diagnosed with later-stage disease and generally have reduced chances of long-term success with the treatments available. We must become inclusive, not exclusive, in our worldview and, through outreach and development programs, begin to build infrastructure and access to affordable care worldwide.

Thinking Outside the Box

Personalized or individualized patient cancer care is a popular buzz phrase these days. In reality, we currently have very few drugs or targeted agents that act upon the numerous genetic or epigenetic abnormalities present in the average cancer. To find drugs for new targets or abnormal pathways, we must create a system with rapid assessment, cost effectiveness, and streamlined regulatory approval for patients with lethal diseases. Personalized cancer treatment is not affordable without major changes in policy and practice. We should recognize that malignant tumors have interesting physicochemical and electrical properties different from those of the normal tissues from which they arise. Therapy with electromagnetic fields specifically tailored to a given patient’s tumor properties can enhance tumor blood flow and improve delivery of drugs or agents while reducing toxicity and side effects. Developing approaches that do not produce acute and long-term side effects, or an increased risk of developing second malignancies, must be a priority.

Science and technology information is being produced at an incomprehensible rate. We need help from specialized colleagues with big data management and recognition of trends and developments which can be quickly disseminated throughout the medical community, and to appropriate patient populations. All of these measures require commitment and dedication to changing the way we think, reversing priorities based far too much on profitability of treatments rather than availability and affordability of treatment, and we cannot ignore the importance of programs to improve cancer prevention, screening, and early diagnosis.

Posthumanists and perhaps especially transhumanists tend to downplay the value conflicts that are likely to emerge in the wake of a rapidly changing technoscientific landscape. What follows are six questions and scenarios that are designed to focus thinking by drawing together several tendencies that are not normally related to each other but which nevertheless provide the basis for future value conflicts.

  1. Will ecological thinking eventuate in an instrumentalization of life? Generally speaking, biology – especially when a nervous system is involved – is more energy efficient when it comes to storing, accessing and processing information than even the best silicon-based computers. While we still don’t quite know why this is the case, we are nevertheless acquiring greater powers of ‘informing’ biological processes through strategic interventions, ranging from correcting ‘genetic errors’ to growing purpose-made organs, including neurons, from stem cells. In that case, might we not ‘grow’ some organs to function in largely the same capacity as silicon-based computers – especially if it helps to reduce the overall burden that human activity places on the planet? (E.g. the brains in vats in the film Minority Report, which engage in the precognition of crime.) In other words, this new ‘instrumentalization of life’ may be the most environmentally friendly way to prolong our own survival. But is this a good enough reason? Would these specially created organic thought-beings require legal protection or even rights? The environmental movement has, generally speaking, been against the multiplication of artificial life forms (e.g. the controversies surrounding genetically modified organisms), but in this scenario these life forms would potentially provide a means to achieve ecologically friendly goals.

  2. Will concerns for social justice force us to enhance animals? We are becoming more capable of recognizing and decoding animal thoughts and feelings, a fact which has helped to bolster those concerned with animal welfare, not to mention ‘animal rights’. At the same time, we are also developing prosthetic devices (of the sort worn by Stephen Hawking) which can enhance the powers of disabled humans so that their thoughts and feelings can be communicated to a wider audience, and hence enable them to participate in society more effectively. Might we not wish to apply similar prosthetics to animals – and perhaps even ourselves – in order to facilitate the transaction of thoughts and feelings between humans and animals? This proposal might aim ultimately to secure some mutually agreeable ‘social contract’, whereby animals are incorporated more explicitly in the human life-world – not merely as wards but as something closer to citizens. (See, e.g., Donaldson and Kymlicka’s Zoopolis.) However, would this set of policy initiatives constitute a violation of the animals’ species integrity and simply be a more insidious form of human domination?

  3. Will human longevity stifle the prospects for social renewal? For the past 150 years, medicine has been preoccupied with the defeat of death, from reducing infant mortality to extending the human lifespan indefinitely. However, we also see that as people live longer, healthier lives, they also tend to have fewer children. This has already created a pensions crisis in welfare states, in which the diminishing ranks of the next generation work to sustain people who live long beyond the retirement age. How do we prevent this impending intergenerational conflict? Moreover, precisely because each successive generation enters the world without the burden of the previous generations’ memories, it is better disposed to strike out in new directions. All told, then, should death become discretionary in the future, with a positive revaluation of suicide and euthanasia? Moreover, should people be incentivized to have children as part of a societal innovation strategy?

  4. Will the end of death trivialize life? A set of trends taken together call into question the finality of death, which is significant because strong normative attitudes against murder and extinction are due largely to the putative irreversibility of these states. Indeed, some have argued that the sanctity – if not the very meaning – of human life itself is intimately related to the finality of death. However, there is a concerted effort to change all this – including cryonics, digital emulations of the brain, DNA-driven ‘de-extinction’ of past species, etc. Should these technologies be allowed to flourish and, in effect, ‘resurrect’ the deceased? As it happens, ‘rights of the dead’ are not recognized in human rights legislation, and environmentalists generally oppose introducing new species to the ecology, which would seem to include not only brand new organisms but also those which once roamed the earth.

  5. Will political systems be capable of delivering on visions of future human income? There are two general visions of how humans will earn their keep in the future, especially in light of projected mass technologically induced unemployment, which will include many ordinary professional jobs. One would be to provide humans with a ‘universal basic income’ funded by some tax on the producers of labour redundancy in both the industrial and the professional classes. The other vision is that people would be provided regular ‘micropayments’ based on the information they routinely provide over the internet, which is becoming the universal interface for human expression. The first vision cuts against the general ‘lower tax’ and ‘anti-redistributive’ mindset of the post-Cold War era, whereas the latter cuts against the perceived public preference for the maintenance of privacy in the face of government surveillance. In effect, both visions of future human income demand that the state reinvent its modern role as guarantor of, respectively, welfare and security – yet now against the backdrop of rapid technological change and laissez-faire cultural tendencies.

  6. Will greater information access turn ‘poverty’ into a lifestyle prejudice? Mobile phone penetration is greater in some impoverished parts of Africa and Asia than in the United States and some other developed countries. While this has made the developed world more informationally available to the developing world, the impact of this technology on the latter’s living conditions has been decidedly mixed. Meanwhile, as we come to a greater understanding of the physiology of impoverished people, we realize that their nervous systems are well adapted to conditions of extreme stress, as are their cultures more generally. (See e.g. Banerjee and Duflo’s Poor Economics.) In that case, there may come a point when the rationale for ‘development aid’ disappears, and ‘poverty’ itself comes to be seen as a prejudicial term. Of course, the developing world may continue to require external assistance in dealing with wars and other (by their standards) extreme conditions, just as any other society might. But otherwise, we might decide in an anti-paternalistic spirit that they should be seen as sufficiently knowledgeable of their own interests to be able to lead what people in the developed world might generally regard as a suboptimal existence – one in which, say, the gap in life expectancy between the developing and developed worlds remains significant and quite possibly increases over time.