
1. Thou shalt first guard the Earth and preserve humanity.

Impact deflection and survival colonies hold the moral high ground above all other calls on public funds.

2. Thou shalt go into space on heavy-lift rockets with hydrogen upper stages and not go extinct.

The human race can only go in one of two directions: space or extinction. Right now we are an endangered species.

3. Thou shalt use the power of the atom to live on other worlds.

Nuclear energy is to the space age as steam was to the industrial revolution; chemical propulsion is useless for interplanetary travel and there is no solar energy in the outer solar system.

4. Thou shalt use nuclear weapons to travel through space.

Physical matter can barely contain chemical reactions; the only way to effectively harness nuclear energy to propel spaceships is to avoid containment problems completely: with bombs.

5. Thou shalt gather ice on the Moon as a shield and travel outbound.

The Moon has water for the minimum 14-foot-thick radiation shield and is a safe place to light off a bomb propulsion system; it is the starting gate.
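For scale, a back-of-envelope sketch of what a 14-foot water shield implies in mass terms. The spherical hull radius and every figure below are illustrative assumptions, not numbers from the text:

```python
# Rough mass of a ~14-foot-thick water radiation shield.
# All figures are illustrative assumptions.
import math

FT_TO_M = 0.3048
thickness_m = 14 * FT_TO_M        # ~4.27 m of water
water_density = 1000.0            # kg/m^3

# Areal density: kilograms of shielding water over each square metre of hull.
areal_density = thickness_m * water_density    # ~4,270 kg/m^2

# Total shield mass for a hypothetical 10 m radius spherical crew section.
hull_radius = 10.0
shell_volume = (4 / 3) * math.pi * ((hull_radius + thickness_m) ** 3
                                    - hull_radius ** 3)
shield_mass_tonnes = shell_volume * water_density / 1000.0   # ~8,000 t
```

Thousands of tonnes of water per ship is the point: hence gathering lunar ice rather than lifting shielding out of Earth's gravity well.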

6. Thou shalt spin thy spaceships and rings and hollow spheres to create gravity and thrive.

Humankind requires Earth gravity and radiation to travel for years through space; anything less is a guarantee of failure.
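The spin-gravity requirement reduces to simple physics: centripetal acceleration a = omega^2 * r. A minimal sketch, assuming a 100 m ring radius (an illustrative number, not from the text):

```python
# Spin rate needed for 1 g of centripetal "gravity" at the rim of a
# rotating ring: a = omega^2 * r. The radius is an assumed example value.
import math

g = 9.81          # m/s^2, the Earth gravity the text says crews require
radius = 100.0    # m, assumed ring radius

omega = math.sqrt(g / radius)         # angular speed, rad/s
rpm = omega * 60 / (2 * math.pi)      # revolutions per minute, ~3 rpm
rim_speed = omega * radius            # speed at the rim, ~31 m/s
```

Since omega scales as 1/sqrt(r), halving the spin rate requires quadrupling the radius, which is one argument for big rings and hollow spheres rather than small capsules on tethers.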

7. Thou shalt harvest the Sun on the Moon and use the energy to power the Earth and propel spaceships with mighty beams.

8. Thou shalt freeze without damage the old and sick and revive them when a cure is found; only an indefinite lifespan will allow humankind to combine and survive. Only with this reprieve can we sleep and reach the stars.

9. Thou shalt build solar power stations in space hundreds of miles in diameter and with this power manufacture small black holes for starship engines.

10. Thou shalt build artificial intellects and with these beings escape the death of the universe and resurrect all who have died, joining all minds on a new plane.

YANKEE.BRAIN.MAP
The Brain Games Begin
Europe’s billion-Euro science-neuro Human Brain Project, mentioned here amongst machine morality last week, is basically already funded and well underway. Now the colonies over in the new world are getting hip, and they too have in the works a project to map/simulate/make their very own copy of the universe’s greatest known computational artifact: the gelatinous wad of convoluted electrical pudding in your skull.

The (speculated but not yet public) Brain Activity Map of America
About 300 different news sources are reporting that a Brain Activity Map project is outlined in the current administration’s to-be-presented budget, and will be detailed sometime in March. Hordes of journalists are calling it “Obama’s Brain Project,” which is stoopid, and probably only because some guy at the New Yorker did and they all decided that’s what they had to do, too. Or somesuch lameness. Or laziness? Deference? SEO?

For reasons both economic and nationalistic, America could definitely use an inspirational, large-scale scientific project right about now. Because seriously, aside from going full-Pavlov over the next iPhone, what do we really have to look forward to these days? Now, if some technotards or bible pounders monkeywrench the deal, the U.S. is going to continue that slide toward scientific… lesserness. So, hippies, religious nuts, and all you little sociopathic babies in politics: zip it. Perhaps, however, we should gently poke and prod the hard of thinking toward a marginally heightened Europhobia — that way they’ll support the project. And it’s worth it. Just, you know, for science.

Going Big. Not Huge, But Big. But Could be Massive.
Both the Euro and American flavors are no Manhattan Project-scale undertaking, in the sense of urgency and motivational factors, but more like the Human Genome Project. Still, with clear directives and similar funding levels (€1 billion & US$1–3 billion, respectively), they’re quite ambitious and potentially far more world-changing than a big bomb. Like, seriously, man. Because brains build bombs. But hopefully an artificial brain would not. Spaceships would be nice, though.

Practically, these projects are expected to expand our understanding of the actual physical loci of human behavioral patterns, get to the bottom of various brain pathologies, stimulate the creation of more advanced AI/non-biological intelligence — and, of course, the big enchilada: help us understand more about our own species’ consciousness.

On Consciousness: My Simulated Brain has an Attitude?
Yes, of course it’s wild speculation to guess at the feelings and worries and conundrums of a simulated brain — but dude, what if, what if one or both of these brain simulation map thingys is done well enough that it shows signs of spontaneous, autonomous reaction? What if it tries to like, you know, do something awesome like self-reorganize, or evolve or something?

Maybe it’s too early to talk personality, but you kinda have to wonder… would the Euro-Brain be smug, never stop claiming superior education yet voraciously consume American culture, and perhaps cultivate a mild racism? Would the ‘Merica-Brain have a nation-scale authority complex, unjustifiable confidence & optimism, still believe in childish romantic love, and overuse the words “dude” and “awesome?”

We shall see. We shall see.

Oh yeah, have to ask:
Anyone going to follow Ray Kurzweil’s recipe?

Project info:
[HUMAN BRAIN PROJECT - - MAIN SITE]
[THE BRAIN ACTIVITY MAP - $ - HUFF-PO]

Kinda Pretty Much Related:
[BLUE BRAIN PROJECT]

This piece originally appeared at Anthrobotic.com on February 28, 2013.

KILL.THE.ROBOTS
The Golden Rule is Not for Toasters

Simplistically nutshelled, talking about machine morality is picking apart whether or not we’ll someday have to be nice to machines or demand that they be nice to us.

Well, it’s always a good time to address human & machine morality vis-à-vis both the engineering and philosophical issues intrinsic to the qualification and validation of non-biological intelligence and/or consciousness that, if manifested, would wholly justify consideration thereof.

Uhh… yep!

But, whether at run-on sentence dorkville or any other tech forum, right from the jump one should know that a single voice rapping about machine morality is bound to get hung up in and blinded by its own perspective, e.g., splitting hairs to decide who or what deserves moral treatment (if a definition of that can even be nailed down), or perhaps yet another justification for the standard intellectual cul de sac:
“Why bother, it’s never going to happen.”
That’s tired and lame.

One voice, one study, or one robot fetishist with a digital bullhorn — one ain’t enough. So, presented and recommended here is a broad-based overview, a selection of the past year’s standout pieces on machine morality. The first, only a few days old, is actually an announcement of intent that could pave the way to forcing the actual question.
Let’s then have perspective:

Building a Brain — Being Humane — Feeling our Pain — Dude from the NYT
February 3, 2013 — Human Brain Project: Simulate One
Serious Euro-Science to simulate a human brain. Will it behave? Will we?

January 28, 2013 — NPR: No Mercy for Robots
A study of reciprocity and punitive reaction to non-human actors. Bad robot.

April 25, 2012 — IEEE Spectrum: Attributing Moral Accountability to Robots
On the human expectation of machine morality. They should be nice to me.

December 25, 2011 — NYT: The Future of Moral Machines
Engineering (at least functional) machine morality. Broad strokes NYT-style.

Expectations More Human than Human?
Now, of course you’re going to check out those pieces you just skimmed over, after you finish trudging through this anti-brevity technosnark©®™ hybrid, of course. When you do — you might notice the troubling rub of expectation dichotomy. Simply put, these studies and reports point to a potential showdown between how we treat our machines, how we might expect others to treat them, and how we might one day expect to be treated by them. For now, morality is irrelevant; it is of no consideration or consequence in our thoughts or intentions toward machines. But at the same time we hold dear the expectation of reasonable treatment, if not moral, by any intelligent agent — even an only vaguely human robot.

Well what if, for example: 1. AI matures, and 2. machines really start to look like us?
(see: Leaping Across Mori’s Uncanny Valley: Androids Probably Won’t Creep Us Out)

Even now should someone attempt to smash your smartphone or laptop (or just touch it), you of course protect the machine. Extending beyond concerns over the mere destruction of property or loss of labor, could one morally abide harm done to one’s marginally convincing humanlike companion? Even if fully accepting of its artificiality, where would one draw the line between economic and emotional damage? Or, potentially, could the machine itself abide harm done to it? Even if imbued with a perfectly coded algorithmic moral code mandating “do no harm,” could a machine calculate its passive non-response to intentional damage as an immoral act against itself, and then react?

Yeah, these hypotheticals can go on forever, but it’s clear that blithely ignoring machine morality or overzealously attempting to engineer it might result in… immorality.

Probably Only a Temporary Non-Issue. Or Maybe. Maybe Not.
There’s an argument that actually needing to practically implement or codify machine morality is so remote that debate is, now and forever, only that — and oh wow, that opinion is superbly dumb. This author has addressed this staggeringly arrogant species-level macro-narcissism before (and it was awesome). See, outright dismissal isn’t a dumb argument because a self-aware machine or something close enough for us to regard as such is without doubt going to happen, it’s dumb because 1. absolutism is fascist, and 2. to the best of our knowledge, excluding the magic touch of Jesus & friends or aliens spiking our genetic punch or whatever, conscious and/or self-aware intelligence (which would require moral consideration) appears to be an emergent trait of massively powerful computation. And we’re getting really good at making machines do that.

Whatever the challenge, humans rarely avoid stabbing toward the supposedly impossible — and a lot of the time, we do land on the moon. The above-mentioned Euro-project says it’ll need 10 years to crank out a human brain simulation. Okay, respectable. But a working draft of the human genome, an initially 15-year international project, was completed 5 years ahead of schedule due largely to advances in brute-force computational capability (in the not-so-digital 1990s). All that computery stuff like, you know, gets better a lot faster these days. Just sayin.
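The schedule point can be sketched numerically. Assuming computing capability doubles roughly every 18 months (a common Moore's-law figure, not a claim made in the text):

```python
# Rough sketch of the schedule argument: how much raw computing capability
# multiplies over a project's lifetime, assuming ~18-month doubling.
def growth_factor(years, doubling_years=1.5):
    """Capability multiplier over a span of years."""
    return 2 ** (years / doubling_years)

# Over the genome project's originally planned 15 years, capability grows
# roughly a thousandfold, which is how schedules get beaten by 5 years.
print(round(growth_factor(15)))   # prints 1024
```

On those assumptions, a 10-year brain-simulation plan would see its hardware improve by a factor of about a hundred before the deadline.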

So, you know, might be a good idea to keep hashing out ideas on machine morality.
Because who knows what we might end up with…

Oh sure, I understand, turn me off, erase me — time for a better model, I totally get it.
- or -
Hey, meatsack, don’t touch me or I’ll reformat your squishy face!

Choose your own adventure!

[HUMAN BRAIN PROJECT]
[NO MERCY FOR ROBOTS — NPR]
[ATTRIBUTING MORAL ACCOUNTABILITY TO ROBOTS — IEEE]
[THE FUTURE OF MORAL MACHINES — NYT]

This piece originally appeared at Anthrobotic.com on February 7, 2013.

http://www.sciencedaily.com/releases/2012/09/120905134912.htm

It is a race against time: will this knowledge save us or destroy us? Genetic modification may eventually reverse aging and bring about a new age, but it is more likely the end of the world is coming.

The Fermi Paradox informs us that intelligent life may not be intelligent enough to keep from destroying itself. Nothing will destroy us faster or more certainly than an engineered pathogen (except possibly an asteroid or comet impact). The only answer to this threat is an off-world survival colony. Ceres would be perfect.

Whether via spintronics or some quantum breakthrough, artificial intelligence and the bizarre idea of intellects far greater than ours will soon have to be faced.

http://www.sciencedaily.com/releases/2012/08/120819153743.htm

AI scientist Hugo de Garis has prophesied the next great historical conflict will be between those who would build gods and those who would stop them.

It seems to be happening before our eyes as the incredible pace of scientific discovery leaves our imaginations behind.

We need only flush the toilet to power the artificial mega-mind coming into existence within the next few decades. I am actually not intentionally trying to write anything bizarre; it is just this strange planet we are living on.

http://www.sciencedaily.com/releases/2012/08/120813155525.htm

http://www.sciencedaily.com/releases/2012/08/120813123034.htm

Emanuel Pastreich

Professor

Kyung Hee University

June 9, 2012

The Crisis in Education in Korea and the World

The suicide of four students at KAIST in Korea last year has made it apparent that there is something fundamentally wrong with the manner in which our children are educated. It is not an issue of one test system over another, or the amount of studying students must do.

Artificial brain ’10 years away’

By Jonathan Fildes
Technology reporter, BBC News, Oxford

A detailed, functional artificial human brain can be built within the next 10 years, a leading scientist has claimed.

Henry Markram, director of the Blue Brain Project, has already simulated elements of a rat brain.

He told the TED Global conference in Oxford that a synthetic human brain would be of particular use finding treatments for mental illnesses.

Around two billion people are thought to suffer some kind of brain impairment, he said.

“It is not impossible to build a human brain and we can do it in 10 years,” he said.

“And if we do succeed, we will send a hologram to TED to talk.”

‘Shared fabric’

The Blue Brain project was launched in 2005 and aims to reverse engineer the mammalian brain from laboratory data.

In particular, his team has focused on the neocortical column — repetitive units of the mammalian brain known as the neocortex.


“It’s a new brain,” he explained. “The mammals needed it because they had to cope with parenthood, social interactions, complex cognitive functions.

“It was so successful an evolution from mouse to man it expanded about a thousand fold in terms of the numbers of units to produce this almost frightening organ.”

And that evolution continues, he said. “It is evolving at an enormous speed.”

Over the last 15 years, Professor Markram and his team have picked apart the structure of the neocortical column.

“It’s a bit like going and cataloguing a bit of the rainforest — how many trees does it have, what shape are the trees, how many of each type of tree do we have, what is the position of the trees,” he said.

“But it is a bit more than cataloguing because you have to describe and discover all the rules of communication, the rules of connectivity.”

The project now has a software model of “tens of thousands” of neurons — each one of which is different — which has allowed them to digitally construct an artificial neocortical column.

Although each neuron is unique, the team has found the patterns of circuitry in different brains have common patterns.

“Even though your brain may be smaller, bigger, may have different morphologies of neurons — we do actually share the same fabric,” he said.

“And we think this is species specific, which could explain why we can’t communicate across species.”

World view

To make the model come alive, the team feeds the models and a few algorithms into a supercomputer.

“You need one laptop to do all the calculations for one neuron,” he said. “So you need ten thousand laptops.”


Instead, he uses an IBM Blue Gene machine with 10,000 processors.

Simulations have started to give the researchers clues about how the brain works.

For example, they can show the brain a picture — say, of a flower — and follow the electrical activity in the machine.

“You excite the system and it actually creates its own representation,” he said.

Ultimately, the aim would be to extract that representation and project it so that researchers could see directly how a brain perceives the world.

But as well as advancing neuroscience and philosophy, the Blue Brain project has other practical applications.

For example, by pooling all the world’s neuroscience data on animals — to create a “Noah’s Ark” — researchers may be able to build animal models.

“We cannot keep on doing animal experiments forever,” said Professor Markram.

It may also give researchers new insights into diseases of the brain.

“There are two billion people on the planet affected by mental disorder,” he told the audience.

The project may give insights into new treatments, he said.

The TED Global conference runs from 21 to 24 July in Oxford, UK.


Singularity Hub

Create an AI on Your Computer

Written on May 28, 2009 – 11:48 am | by Aaron Saenz |

If many hands make light work, then maybe many computers can make an artificial brain. That’s the basic reasoning behind Intelligence Realm’s Artificial Intelligence project. By reverse engineering the brain through a simulation spread out over many different personal computers, Intelligence Realm hopes to create an AI from the ground-up, one neuron at a time. The first waves of simulation are already proving successful, with over 14,000 computers used and 740 billion neurons modeled. Singularity Hub managed to snag the project’s leader, Ovidiu Anghelidi, for an interview: see the full text at the end of this article.

The ultimate goal of Intelligence Realm is to create an AI or multiple AIs, and use these intelligences in scientific endeavors. By focusing on the human brain as a prototype, they can create an intelligence that solves problems and “thinks” like a human. This is akin to the work done at FACETS that Singularity Hub highlighted some weeks ago. The largest difference between Intelligence Realm and FACETS is that Intelligence Realm is relying on a purely simulated/software approach.

Which sort of makes Intelligence Realm similar to the Blue Brain Project that Singularity Hub also discussed. Both are computer simulations of neurons in the brain, but Blue Brain’s ultimate goal is to better understand neurological functions, while Intelligence Realm is seeking to eventually create an AI. In either case, to successfully simulate the brain in software alone, you need a lot of computing power. Blue Brain runs off a high-tech supercomputer, a resource that’s pretty much exclusive to that project. Even with that impressive commodity, Blue Brain is hitting the limit of what it can simulate. There’s too much to model for just one computer alone, no matter how powerful. Intelligence Realm is using a distributed computing solution. Where one computer cluster alone may fail, many working together may succeed. Which is why Intelligence Realm is looking for help.
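The divide-and-merge pattern described here can be sketched in miniature. This is a toy illustration of the work-unit idea, not Intelligence Realm's actual code; the per-chunk "dynamics" are a deterministic stand-in:

```python
# Toy sketch of distributed simulation: split a large neuron population into
# work units, "simulate" each chunk independently, then merge the partial
# results -- the same shape a BOINC project has at vastly larger scale.
# Illustrative only; not Intelligence Realm's actual code.
from concurrent.futures import ThreadPoolExecutor

def simulate_chunk(start, count):
    # Stand-in for real neuron dynamics: a deterministic per-neuron "activity".
    return sum((start + i) % 7 for i in range(count))

def run_distributed(total_neurons, workers=4):
    chunk = total_neurons // workers   # assumes workers divides the total evenly
    with ThreadPoolExecutor(max_workers=workers) as pool:
        parts = [pool.submit(simulate_chunk, w * chunk, chunk)
                 for w in range(workers)]
        return sum(f.result() for f in parts)

def run_serial(total_neurons):
    return simulate_chunk(0, total_neurons)
```

Splitting the work changes nothing about the answer, only about who computes it: `run_distributed(10_000)` equals `run_serial(10_000)`, whether the "workers" are threads or 14,000 volunteer PCs.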

The AI system project is actively recruiting, with more than 6700 volunteers answering the call. Each volunteer runs a small portion of the larger simulation on their computer(s) and then ships the results back to the main server. BOINC, the Berkeley built distributed computing software that makes it all possible, manages the flow of data back and forth. It’s the same software used for SETI’s distributed computing processing. Joining the project is pretty simple: you just download BOINC, some other data files, and you’re good to go. You can run the simulation as an application, or as part of your screen saver.

Baby Steps

So, 6700 volunteers, 14,000 or so platforms, 740 billion neurons, but what is the simulated brain actually thinking? Not a lot at the moment. The same is true with the Blue Brain Project, or FACETS. Simulating a complex organ like the brain is a slow process, and the first steps are focused on understanding how the thing actually works. Inputs (Intelligence Realm is using text strings) are converted into neuronal signals, those signals are allowed to interact in the simulation and the end state is converted back to an output. It’s a time and labor (computation) intensive process. Right now, Intelligence Realm is just building towards simple arithmetic.
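The input-to-neurons-to-output pipeline described above can be illustrated with a toy single-neuron model. This leaky integrate-and-fire sketch is purely illustrative; the project's real encoding scheme is not public, so every detail here is assumed:

```python
# Toy version of the pipeline: text in -> neuronal signals -> output.
# One leaky integrate-and-fire neuron stands in for the whole simulation.

def lif_spikes(currents, threshold=1.0, leak=0.9):
    """Drive one leaky neuron with a sequence of input currents; return spike count."""
    v, spikes = 0.0, 0
    for current in currents:
        v = v * leak + current   # membrane leaks, then integrates the input
        if v >= threshold:       # crossing threshold: fire and reset
            spikes += 1
            v = 0.0
    return spikes

# Encode a text string as one input current per character, run it through the
# neuron, and read the spike count back out as the "output" of the simulation.
text = "Intelligence Realm"
currents = [ord(c) / 200.0 for c in text]
output = lif_spikes(currents)
```

The real project does this with hundreds of billions of interacting neurons instead of one, which is exactly why it needs thousands of volunteer machines.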

Which is definitely a baby step, but there are more steps ahead. Intelligence Realm plans on learning how to map numbers to neurons, understanding the kind of patterns of neurons in your brain that represent numbers, and figuring out basic mathematical operators (addition, subtraction, etc). From these humble beginnings, more complex reasoning will emerge. At least, that’s the plan.

Intelligence Realm isn’t just building some sort of biophysical calculator. Their brain is being designed so that it can change and grow, just like a human brain. They’ve focused on simulating all parts of the brain (including the lower reasoning sections) and increasing the plasticity of their model. Right now it’s stumbling towards knowing 1+1 = 2. Even with linear growth they hope that this same stumbling intelligence will evolve into a mental giant. It’s a monumental task, though, and there’s no guarantee it will work. Building artificial intelligence is probably one of the most difficult tasks to undertake, and this early in the game, it’s hard to see if the baby steps will develop into adult strides. The simulation process may not even be the right approach. It’s a valuable experiment for what it can teach us about the brain, but it may never create an AI. A larger question may be, do we want it to?

Knock, Knock…It’s Inevitability

With the newest Terminator movie out, it’s only natural to start worrying about the dangers of artificial intelligence again. Why build these things if they’re just going to hunt down Christian Bale? For many, the threats of artificial intelligence make it seem like an effort of self-destructive curiosity. After all, from Shelley’s Frankenstein Monster to Adam and Eve, Western civilization seems to believe that creations always end up turning on their creators.

AI, however, promises rewards as well as threats. Problems in chemistry, biology, physics, economics, engineering, and astronomy, even questions of philosophy could all be helped by the application of an advanced AI. What’s more, as we seek to upgrade ourselves through cybernetics and genetic engineering, we will become more artificial. In the end, the line between artificial and natural intelligence may be blurred to a point that AIs will seem like our equals, not our eventual oppressors. However, that’s not a path that everyone will necessarily want to walk down.

Will AI and Humans learn to co-exist?

The nature of distributed computing and BOINC allow you to effectively vote on whether or not this project will succeed. Intelligence Realm will eventually need hundreds of thousands if not millions of computing platforms to run their simulations. If you believe that AI deserves a chance to exist, give them a hand and recruit others. If you think we’re building our own destroyers, then don’t run the program. In the end, the success or failure of this project may very well depend on how many volunteers are willing to serve as midwives to a new form of intelligence.

Before you make your decision though, make sure to read the following interview. As project leader, Ovidiu Anghelidi is one of the driving minds behind reverse engineering the brain and developing the eventual AI that Intelligence Realm hopes to build. He didn’t mean for this to be a recruiting speech, but he makes some good points:

SH: Hello. Could you please start by giving yourself and your project a brief introduction?

OA: Hi. My name is Ovidiu Anghelidi and I am working on a distributed computing project involving thousands of computers in the field of artificial intelligence. Our goal is to develop a system that can perform automated research.

What drew you to this project?

During my adolescence I tried to understand the nature of questions. I used questions extensively as a learning tool. That drove me to search for better understanding methods. After looking at all kinds of methods, I kinda felt that understanding creativity is a worthier pursuit. Applying various methods of learning and understanding is a fine job, but finding outstanding solutions requires much more than that. For a short while I tried to understand how creativity works and what exactly it is. I found out that there is not much work done on this subject, mainly because it is an overlapping concept. The search for creativity led me to the field of AI. Because one of the past presidents of the American Association of Artificial Intelligence dedicated an entire issue to this subject, I started pursuing that direction. I looked into the field of artificial intelligence for a couple of years, and at some point I was reading more and more papers that touched on the subject of cognition and the brain, so I looked briefly into neuroscience. After I read an introductory book about neuroscience, I realized that understanding brain mechanisms is what I should have done all along, for the past 20 years. To this day I am pursuing this direction.

What’s your time table for success? How long till we have a distributed AI running around using your system?

I have been working on this project for about 3 years now, and I estimate that we will need another 7–8 years to finalize the project. Nonetheless, we do not need that much time to be able to use some of its features. I expect to have some basic features working within a couple of months. Take, for example, the multiple-simulations feature. If we want to pursue various directions in different fields (i.e., mathematics, biology, physics), we will need to set up a simulation for each field. But we do not need to get to the end of the project to be able to run single simulations.

Do you think that Artificial Intelligence is a necessary step in the evolution of intelligence? If not, why pursue it? If so, does it have to happen at a given time?

I wouldn’t say necessary, because we don’t know what we are evolving towards. As long as we do not have the full picture from beginning to end, or cases from other species to compare our history to, we shouldn’t just assume that it is necessary.

We should pursue it with all our strength and understanding because soon enough it can give us a lot of answers about ourselves and this Universe. By soon I mean two or three decades. A very short time span, indeed. Artificial Intelligence will amplify a couple of orders of magnitude our research efforts across all disciplines.

In our case it is a natural extension. Any species that reaches a certain level of intelligence will, at some point in time, start replicating and extending its natural capacities in order to control its environment. The human race has done that for the last couple of thousand years; we tried to replicate and extend our capacity to run, see, smell, and touch. Now it has reached thinking. We invented vehicles, television sets, and other devices, and we are now close to having artificial intelligence.

What do you think are important short term and long term consequences of this project?

We hope that in short term we will create some awareness in regards to the benefits of artificial intelligence technology. Longer term it is hard to foresee.

How do you see Intelligence Realm interacting with more traditional research institutions? (Universities, peer reviewed Journals, etc)

Well… we will not be able to provide full details about the entire project because we are pursuing a business model so that we can support the project in the future, so there is little chance of a collaboration with a university or other research institution. Down the road, as we reach an advanced stage of development, we will probably forge some collaborations. For the time being this doesn’t appear feasible. I am open to collaborations, but I can’t see how that would happen.

I submitted some papers to a couple of journals in the past, but I usually receive suggestions that I should look at other journals, from other fields. Most of the work in artificial intelligence doesn’t have neuroscience elements and the work in neuroscience contains little or no artificial intelligence elements. Anyway, I need no recognition.

Why should someone join your project? Why is this work important?

If someone is interested in artificial intelligence, it might help them have a different view on the subject and see what components are being developed over time. I cannot tell how important this is for someone else. On a personal level, I can say that because my work is important to me, and because having an AI system will let me get answers to many questions, I am working on it. Artificial intelligence will provide exceptional benefits to the entire society.

What should someone do who is interested in joining the simulation? What can someone do if they can’t participate directly? (Is there a “write-your-congressman” sort of task they could help you with?)

If someone is interested in joining the project, they need to download the BOINC client from http://boinc.berkeley.edu and then attach to the project using its master URL, http://www.intelligencerealm.com/aisystem. We appreciate the support received from thousands of volunteers all over the world.

If someone can’t participate directly, I suggest they keep an open mind about what AI is and how it can benefit them. They should also try to understand its pitfalls.

There is no write-your-congressman type of task. Mass education is key for AI success. This project doesn’t need to be in the spotlight.

What is the latest news?

We reached 14,000 computers and we simulated over 740 billion neurons. We are working on implementing a basic hippocampal model for learning and memory.

Anything else you want to tell us?

If someone considers the development of artificial intelligence impossible or too far into the future to care about, I can only tell him or her, “Embrace the inevitable”. The advances in the field of neuroscience are increasing rapidly. Scientists are thorough.

Understanding its benefits and pitfalls is all that is needed.

Thank you for your time and we look forward to covering Intelligence Realm as it develops further.

Thank you for having me.

In an important step forward for acknowledging the possibility of real AI in our immediate future, a report commissioned by the UK government says robots may one day have the same rights and responsibilities as human citizens. The Financial Times reports:

The next time you beat your keyboard in frustration, think of a day when it may be able to sue you for assault. Within 50 years we might even find ourselves standing next to the next generation of vacuum cleaners in the voting booth. Far from being extracts from the extreme end of science fiction, the idea that we may one day give sentient machines the kind of rights traditionally reserved for humans is raised in a British government-commissioned report which claims to be an extensive look into the future. Visions of the status of robots around 2056 have emerged from one of 270 forward-looking papers sponsored by Sir David King, the UK government’s chief scientist.

The paper covering robots’ rights was written by a UK partnership of Outsights, the management consultancy, and Ipsos Mori, the opinion research organisation. “If we make conscious robots they would want to have rights and they probably should,” said Henrik Christensen, director of the Centre of Robotics and Intelligent Machines at the Georgia Institute of Technology. The idea will not surprise science fiction aficionados.

It was widely explored by Dr Isaac Asimov, one of the foremost science fiction writers of the 20th century. He wrote of a society where robots were fully integrated and essential in day-to-day life. In his system, the ‘three laws of robotics’ governed machine life. They decreed that robots could not injure humans, must obey orders and protect their own existence – in that order.

Robots and machines are now classed as inanimate objects without rights or duties but if artificial intelligence becomes ubiquitous, the report argues, there may be calls for humans’ rights to be extended to them. It is also logical that such rights are meted out with citizens’ duties, including voting, paying tax and compulsory military service.

Mr Christensen said: “Would it be acceptable to kick a robotic dog even though we shouldn’t kick a normal one? There will be people who can’t distinguish that so we need to have ethical rules to make sure we as humans interact with robots in an ethical manner so we do not move our boundaries of what is acceptable.”

The Horizon Scan report argues that if ‘correctly managed’, this new world of robots’ rights could lead to increased labour output and greater prosperity. “If granted full rights, states will be obligated to provide full social benefits to them including income support, housing and possibly robo-healthcare to fix the machines over time,” it says.

But it points out that the process has casualties and the first one may be the environment, especially in the areas of energy and waste.

Human-level AI could be invented within 50 years, if not much sooner. Our supercomputers are already approaching the computing power of the human brain, and the software end of things is starting to progress steadily. It’s time for us to start thinking about AI as a positive and negative factor in global risk.
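The "approaching the computing power of the human brain" claim usually rests on a back-of-envelope estimate like the following. Every figure is a rough, contested, order-of-magnitude assumption, not a measured fact:

```python
# A common back-of-envelope estimate of the brain's raw "operations per
# second". All figures are rough order-of-magnitude assumptions.
neurons = 8.6e10            # ~86 billion neurons in a human brain
synapses_per_neuron = 1e4   # ~10,000 synapses per neuron
updates_per_second = 100    # assume each synapse updates ~100 times a second

ops_per_second = neurons * synapses_per_neuron * updates_per_second
print(f"{ops_per_second:.1e}")   # prints 8.6e+16
```

Roughly 10^17 operations per second, i.e. around a hundred petaFLOPS, which is why comparisons to top supercomputers keep coming up; whether raw operations are the right currency for intelligence is a separate question.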