
With our growing resources, the Lifeboat Foundation has teamed with the Singularity Hub as Media Sponsors for the 2010 Humanity+ Summit. If you have suggestions on future events that we should sponsor, please contact [email protected].

Following the inaugural conference in Los Angeles in December 2009, the summer 2010 “Humanity+ @ Harvard — The Rise Of The Citizen Scientist” conference moves to the East Coast, to Harvard University’s prestigious Science Hall, on June 12–13. Futurist, inventor, and author of the NYT bestselling book “The Singularity Is Near”, Ray Kurzweil will be the conference’s keynote speaker.

Also speaking at the H+ Summit @ Harvard is Aubrey de Grey, a biomedical gerontologist based in Cambridge, UK, and Chief Science Officer of the SENS Foundation, a California-based charity dedicated to combating the aging process. His talk, “Hype and anti-hype in academic biogerontology research: a call to action”, will analyze the interplay of over-pessimistic and over-optimistic positions in the research and development of cures, and propose solutions to alleviate the negative effects of both.

The theme is “The Rise Of The Citizen Scientist”, as Alex Lightman, Executive Director of Humanity+, illustrates in his talk:

“Knowledge may be expanding exponentially, but the current rate of civilizational learning and institutional upgrading is still far too slow in the century of peak oil, peak uranium, and ‘peak everything’. Humanity needs to gather vastly more data as part of ever larger and more widespread scientific experiments, and make science and technology flourish in streets, fields, and homes as well as in university and corporate laboratories.”

Humanity+ Summit @ Harvard is an unmissable event for anyone interested in the evolution of the rapidly changing human condition and the impact of accelerating technological change on the daily lives of individuals and on our society as a whole. Tickets start at only $150, with an additional 50% discount for students registering with the coupon STUDENTDISCOUNT (valid student ID required at the time of admission).

With over 40 speakers and 50 sessions in two jam-packed days, attendees and speakers will have many opportunities to interact and discuss, giving the conference its necessary networking component.

Other speakers already listed on the H+ Summit program page include:

  • David Orban, Chairman of Humanity+: “Intelligence Augmentation, Decision Power, And The Emerging Data Sphere”
  • Heather Knight, CTO of Humanity+: “Why Robots Need to Spend More Time in the Limelight”
  • Andrew Hessel, Co-Chair at Singularity University: “Altered Carbon: The Emerging Biological Diamond Age”
  • M. A. Greenstein, Art Center College of Design: “Sparking our Neural Humanity with Neurotech!”
  • Michael Smolens, CEO of dotSUB: “Removing language as a barrier to cross cultural communication”

New speakers will be announced in rapid succession, rounding out a schedule that is guaranteed to inform, intrigue, stimulate, and provoke, advancing our planetary understanding of the evolution of the human condition!

H+ Summit @ Harvard — The Rise Of The Citizen Scientist
June 12–13, Harvard University
Cambridge, MA

You can register at http://www.eventbrite.com/event/648806598/friendsofhplus/4141206940.

AI is our best hope for long-term survival. If we fail to create it, that failure will have some cause. Here I suggest a complete list of possible causes of failure, though I do not believe in them. (I was inspired by V. Vinge’s article “What If the Singularity Does Not Happen?”.)

I think most of these points are wrong and that AI will finally be created.

Technical reasons:
1) Moore’s Law will be stopped by physical limits before hardware powerful and cheap enough for artificial intelligence becomes available.
2) Silicon processors are less efficient than neurons for building artificial intelligence.
3) The AI problem cannot be algorithmically parallelized, so any AI will be extremely slow (the sketch below illustrates the ceiling).
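
Point 3 is, in effect, an appeal to Amdahl’s law: if some fraction of a workload is inherently serial, no amount of extra hardware removes that ceiling. A minimal Python sketch of the bound (the parameter values are illustrative, not claims about real AI workloads):

```python
# Amdahl's law: with a fraction p of the work parallelizable across n
# processors, overall speedup is capped at 1 / ((1 - p) + p / n).
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# Even a million processors cannot beat the 1 / (1 - p) ceiling.
for p in (0.50, 0.90, 0.99):
    print(f"parallel fraction {p:.2f}: "
          f"1e6 cores -> {amdahl_speedup(p, 10**6):.1f}x "
          f"(ceiling {1.0 / (1.0 - p):.0f}x)")
```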

Philosophy:
4) Human beings use some method of processing information that is essentially inaccessible to algorithmic computers, as Penrose believes. (But we could harness that method through bioengineering.) Generally, a final proof of the impossibility of creating artificial intelligence would be tantamount to recognizing the existence of the soul.
5) A system cannot create a system more complex than itself, so people cannot create artificial intelligence, since all the proposed solutions are too simple. That is, AI is possible in principle, but people are too stupid to build it. Indeed, one reason for past failures in the creation of artificial intelligence is that people underestimated the complexity of the problem.
6) AI is impossible, because any sufficiently complex system reveals the meaninglessness of existence and stops.
7) All possible ways to optimize are exhausted. AI has no fundamental advantage over the human-machine interface and only a limited scope of use.
8) A human in a body has the maximum attainable level of common sense, and any disembodied AI is either ineffective or merely a model of a person.
9) AI is created, but finds no problems it could and should address: all problems have either been solved by conventional methods or proven uncomputable.
10) AI is created, but is not capable of recursive self-optimization, since that would require radically new ideas which it lacks. As a result, AI exists either as a curiosity or in limited specific applications, such as automatic drivers.
11) The idea of artificial intelligence is flawed, because it has no precise definition, or is even an oxymoron, like “artificial natural”. As a result, development produces specialized tools or models of man, but not universal artificial intelligence.
12) There is an upper limit to the complexity of systems, beyond which they become chaotic and unstable, and it only slightly exceeds the intellect of the most intelligent people. AI slowly approaches this complexity threshold.
13) The bearer of intelligence is qualia. At our level of intelligence there must be many events that are indescribable and unknowable, but a superintellect should understand them by definition; otherwise it is not a superintellect, but simply a fast intellect.

Economic:
14) The growth of computer programs led to an increase in failures so spectacular that software automation had to be abandoned. This caused a drop in demand for powerful computers and stopped Moore’s Law before it reached its physical limits. The same growth in complexity and failure rates also made the creation of AI more difficult.
15) AI is possible, but gives no significant advantage over humans in the quality of results, speed, or cost of computing. For example, a simulation of a human costs a billion dollars and has no idea how to self-optimize. Meanwhile, people find ways to boost their own intellectual abilities, for instance by injecting stem-cell precursors of neurons, which further increases their competitive advantage.
16) No one works on AI development, because it is considered impossible: a self-fulfilling prophecy. AI is pursued only by cranks who lack sufficient intellect and money. An effort on the scale of the Manhattan Project could solve the problem, but no one undertakes it.
17) The technology for uploading consciousness into a computer develops far enough to serve all the practical purposes associated with AI, so there is no need to create an algorithmic AI. The upload is done mechanically, through scanning, while no one yet understands what happens inside the brain.

Political:
18) AI systems are prohibited or severely restricted for ethical reasons, so that people can still feel themselves above all. Perhaps specialized AI systems are permitted in military and aerospace applications.
19) AI is prohibited for safety reasons, since it represents too great a global risk.
20) AI emerged and established its authority over the Earth, but does not show itself, except that it does not allow others to develop their own AI projects.
21) AI did not appear in the form that was imagined, and therefore no one calls it AI (e.g., the distributed intelligence of social networks).

Artificial brain ’10 years away’

By Jonathan Fildes
Technology reporter, BBC News, Oxford

A detailed, functional artificial human brain can be built within the next 10 years, a leading scientist has claimed.

Henry Markram, director of the Blue Brain Project, has already simulated elements of a rat brain.

He told the TED Global conference in Oxford that a synthetic human brain would be of particular use finding treatments for mental illnesses.

Around two billion people are thought to suffer some kind of brain impairment, he said.

“It is not impossible to build a human brain and we can do it in 10 years,” he said.

“And if we do succeed, we will send a hologram to TED to talk.”

‘Shared fabric’

The Blue Brain project was launched in 2005 and aims to reverse engineer the mammalian brain from laboratory data.

In particular, his team has focused on the neocortical column — repeating units of the part of the mammalian brain known as the neocortex.

[Image: neurons. Caption: The team is trying to reverse engineer the brain.]

“It’s a new brain,” he explained. “The mammals needed it because they had to cope with parenthood, social interactions, complex cognitive functions.

“It was so successful an evolution from mouse to man it expanded about a thousand fold in terms of the numbers of units to produce this almost frightening organ.”

And that evolution continues, he said. “It is evolving at an enormous speed.”

Over the last 15 years, Professor Markram and his team have picked apart the structure of the neocortical column.

“It’s a bit like going and cataloguing a bit of the rainforest — how many trees does it have, what shape are the trees, how many of each type of tree do we have, what is the position of the trees,” he said.

“But it is a bit more than cataloguing because you have to describe and discover all the rules of communication, the rules of connectivity.”

The project now has a software model of “tens of thousands” of neurons — each one of which is different — which has allowed them to digitally construct an artificial neocortical column.

Although each neuron is unique, the team has found that the circuitry of different brains shares common patterns.

“Even though your brain may be smaller, bigger, may have different morphologies of neurons — we do actually share the same fabric,” he said.

“And we think this is species specific, which could explain why we can’t communicate across species.”

World view

To make the model come alive, the team feeds the models and a few algorithms into a supercomputer.

“You need one laptop to do all the calculations for one neuron,” he said. “So you need ten thousand laptops.”

[Image: computer-generated image of a human brain. Caption: The research could give insights into brain disease.]

Instead, he uses an IBM Blue Gene machine with 10,000 processors.
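
To get a feel for the arithmetic behind the “one laptop per neuron” figure, here is a leaky integrate-and-fire neuron, roughly the simplest model in common use. Blue Brain’s cells are far more detailed multi-compartment Hodgkin-Huxley models, so this sketch understates the real cost by orders of magnitude; the parameter values are generic textbook ones, not the project’s:

```python
# Leaky integrate-and-fire neuron: membrane voltage v decays toward rest
# and is pushed up by input drive; crossing threshold emits a spike.
dt, tau = 1e-4, 0.02                     # 0.1 ms timestep, 20 ms time constant
v_rest, v_thresh, v_reset = -0.065, -0.050, -0.065   # volts

def simulate(drive, steps=10_000):       # 10,000 steps = 1 s of biological time
    v, spikes = v_rest, 0
    for _ in range(steps):
        v += (dt / tau) * (v_rest - v) + dt * drive
        if v >= v_thresh:                # fire and reset
            v, spikes = v_reset, spikes + 1
    return spikes

print(simulate(drive=1.0), "spikes in one simulated second")
# Even this toy costs ~10,000 updates per neuron per simulated second;
# detailed models multiply that by thousands of compartments and channels.
```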

Simulations have started to give the researchers clues about how the brain works.

For example, they can show the brain a picture — say, of a flower — and follow the electrical activity in the machine.

“You excite the system and it actually creates its own representation,” he said.

Ultimately, the aim would be to extract that representation and project it so that researchers could see directly how a brain perceives the world.

But as well as advancing neuroscience and philosophy, the Blue Brain project has other practical applications.

For example, by pooling all the world’s neuroscience data on animals to create a “Noah’s Ark”, researchers may be able to build animal models.

“We cannot keep on doing animal experiments forever,” said Professor Markram.

It may also give researchers new insights into diseases of the brain.

“There are two billion people on the planet affected by mental disorder,” he told the audience.

The project may give insights into new treatments, he said.

The TED Global conference runs from 21 to 24 July in Oxford, UK.


It will probably come as a surprise to those who are not well acquainted with the life and work of Alan Turing that in addition to his renowned pioneering work in computer science and mathematics, he also helped to lay the groundwork in the field of mathematical biology (1). Why would a renowned mathematician and computer scientist find himself drawn to the biosciences?

Interestingly, it appears that Turing’s fascination with this sub-discipline of biology most probably stemmed from the same source as the one that inspired his better known research: at that time all of these fields of knowledge were in a state of flux and development, and all posed challenging fundamental questions. Furthermore, in each of the three disciplines that engaged his interest, the matters to which he applied his uniquely creative vision were directly connected to central questions underlying these disciplines, and indeed to deeper and broader philosophical questions into the nature of humanity, intelligence and the role played by evolution in shaping who we are and how we shape our world.

Central to Turing’s biological work was his interest in the mechanisms that shape the development of form and pattern in autonomous biological systems, and which underlie the patterns we see in nature (2), from animal coat markings to leaf arrangement patterns on plant stems (phyllotaxis). This topic of research, which he named “morphogenesis” (3), had not previously been studied with modeling tools. This was a knowledge gap that beckoned Turing, particularly as such methods of research came naturally to him.
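
Turing’s central mechanism, later formalized as reaction-diffusion, is easy to demonstrate: two chemicals that react locally but diffuse at different rates can turn a nearly uniform field into stable spots or stripes. Below is a minimal one-dimensional sketch using the Gray-Scott system, a modern descendant of Turing’s 1952 model; the parameter values are standard textbook choices, not Turing’s own:

```python
import numpy as np

# Gray-Scott reaction-diffusion in 1D: u is consumed by v (u*v*v), u is
# replenished at rate F, v decays at rate F + k, and the two species
# diffuse at different speeds -- the ingredient Turing identified as
# pattern-forming.
n, steps = 256, 20_000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065

u, v = np.ones(n), np.zeros(n)
u[n//2 - 10 : n//2 + 10] = 0.50          # seed a small central disturbance
v[n//2 - 10 : n//2 + 10] = 0.25

def lap(x):                              # discrete Laplacian, periodic bounds
    return np.roll(x, 1) + np.roll(x, -1) - 2 * x

for _ in range(steps):
    uvv = u * v * v
    u += Du * lap(u) - uvv + F * (1 - u)
    v += Dv * lap(v) + uvv - (F + k) * v

# The seed typically breaks up into regularly spaced high-v "spots",
# a one-dimensional analogue of a coat-marking pattern:
print("cells inside spots (v > 0.2):", int((v > 0.2).sum()))
```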

In addition to the diverse reasons that attracted him to the field of pattern formation, a major ulterior motive for his research had to do with a contentious subject which, astonishingly, is still highly controversial in some countries to this day. In studying pattern formation he was seeking to help invalidate the “argument from design” (4) concept, which we know today as the hypothesis of “Intelligent Design.”

Turing was intent on demonstrating that the laws of physics are sufficient to explain our observations in the natural world; or in other words, that our findings do not need an omnipotent creator to explain them. It is ironic that Turing, whose work played a central role in laying the groundwork for the creation of Artificial Intelligence (AI), took a clear stance against creationism. This is testament to his acceptance of scientific evidence and rigorous research over weak analogy.

Unfortunately, those who did not and will not accept Darwinian natural selection as the mechanism of evolution will not see anything compelling in Turing’s work on morphogenesis. To those individuals, the development of AI can be taken as “proof,” or a convincing analogy, of the necessity and presence of a creator, the argument being that the Creator created humanity, and humanity creates AI.

However, what the supporters of intelligent design do not acknowledge is that natural selection is itself precisely the cause underlying the development of both humanity and its AI progeny. Just as natural selection resulted in the phenomena that Turing sought to model in his work on morphogenesis (which brings about the propagation of successful traits through the development of biological form and pattern), it is also the driver for the development of intelligence. Itself generated via internalized neuronal selection mechanisms (5, 6), intelligence allows organisms to adapt to their environment continually during life.

Intelligence is the ultimate tool, the development of which allows organisms to survive; it enables them to learn, respond to their environment and adapt their behavior within their own lifetime. It is the fruit of the natural process that brings about successive development over time in organisms faced with scarcity of resources. Moreover, it now allows humans to defy generational selection and develop intelligences external to our own, making use of computational techniques, including some which utilize evolutionary mechanisms (7).

The eventual development of true AI will be a landmark in many ways, notably in that these intelligences will have the ability to alter their own circuits (their version of neurons), immediately and at will. While the human body is capable of some degree of non-developmental neuronal plasticity, this takes place slowly and control of the process is limited to indirect mechanisms (such as varied forms of learning or stimulation). In contrast, the high plasticity and directly controlled design and structure of AI software and hardware will render them well suited to altering themselves and hence to developing improved subsequent AI generations.

In addition to a jump in the degree of plasticity and its control, AIs will constitute a further step forward with regard to the speed at which beneficial information can be shared. In contrast to the exceedingly slow rate at which advantageous evolutionary adaptations were spread through the populations observed by Darwin (over several generations), the rapidly increasing rates of communication in current society result in successful “adaptations” (which we call science and technology) being distributed at ever-increasing speeds. This is, of course, the principal reason why information sharing is beneficial for humans – it allows us to better adapt to reality and harness the environment to our advantage. It seems reasonable to predict that ultimately the sharing of information in AI will be practically instantaneous.

It is difficult to speculate what a combination of such rapid communication and high plasticity combined with ever-increasing processing speeds will be like. The point at which self-improving AIs emerge has been termed a technological singularity (8).

Thus, in summary: evolution begets intelligence (via evolutionary neuronal selection mechanisms); human intelligence begets artificial intelligence (using, among others, evolutionary computation methods), which at increasing cycle speeds, leads to a technological singularity – a further big step up the evolutionary ladder.

Sadly, being considerably ahead of his time and living in an environment that castigated his lifestyle and drove him from his research, meant that Turing did not live to see the full extent of his work’s influence. While he did not survive to an age in which AIs became prevalent, he did fulfill his ambition by taking part in the defeat of argument from design in the scientific community, and witnessed Darwinian natural selection becoming widely accepted. The breadth of his vision, the insight he displayed, and his groundbreaking research clearly place Turing on an equal footing with the most celebrated scientists of the previous century.

The link is:
http://www.msnbc.msn.com/id/31511398/ns/us_news-military/

“The low-key launch of the new military unit reflects the Pentagon’s fear that the military might be seen as taking control over the nation’s computer networks.”

“Creation of the command, said Deputy Defense Secretary William Lynn at a recent meeting of cyber experts, ‘will not represent the militarization of cyberspace.’”

And where is our lifeboat?

An unmanned beast that cruises over any terrain at speeds that leave an M1A Abrams in the dust

Mean Machine: Troops could use the Ripsaw as an advance scout, sending it a mile or two ahead of a convoy, and use its cameras and new sensor technology to sniff out roadside bombs or ambushes. (Photo: John B. Carnett)

Today’s featured Invention Award winner really requires no justification: it’s an unmanned, armed tank faster than anything the US Army has. Behold, the Ripsaw.

Cue up the Ripsaw’s greatest hits on YouTube, and you can watch the unmanned tank tear across muddy fields at 60 mph, jump 50 feet, and crush birch trees. But right now, as its remote driver inches it back and forth for a photo shoot, it’s like watching Babe Ruth forced to bunt with the bases loaded. The Ripsaw, lurching and belching black puffs of smoke, somehow seems restless.

Like their creation, identical twins Geoff and Mike Howe, 34, don’t like to sit still for long. At age seven, they built a log cabin. Ten years later, they converted a school bus into a drivable, transforming stage for their heavy-metal band, Two Much Trouble. In 2000 they couldn’t agree on their next project: Geoff favored a jet-turbine-powered off-road truck; Mike, the world’s fastest tracked vehicle. “That weekend, Mike calls me down to his garage,” Geoff says. “He’s already got the suspension built for the Ripsaw. So we went with that.”

Every engineer they consulted said they couldn’t best the 42 mph top speed of an M1A Abrams, the most powerful tank in the world. Other tanks are built to protect the people inside, with frames made of heavy armored-steel plates. Designed for rugged unmanned missions, the Ripsaw just needed to go fast, so the brothers started trimming weight. First they built a frame of welded steel tubes, like the ones used in NASCAR, that provides 50 percent more strength at half the weight.

Ripsaw: How It Works: To glide over rough terrain at top speed, the Ripsaw has shock absorbers that provide 14 inches of travel. But when the suspension compresses, it creates slack that could cause a track to come off, potentially flipping the vehicle. So the inventors devised a spring-loaded wheel at the front that extends to keep the tracks taut. The Ripsaw has never thrown a track. (Illustration: Bland Designs)

Behind the Wheel: The Ripsaw’s six cameras send live, 360-degree video to a control room, where program manager Will McMaster steers the tank. (Photo: John B. Carnett)

When you reinvent the tank, finding ready-made parts is no easy task, and a tread light enough to spin at 60 mph and strong enough to hold together at that speed didn’t exist. So the Howes hand-shaped steel cleats and redesigned the mechanism for connecting them in a track. (Because the patent for the mechanism, one of eight on Ripsaw components, is still pending, they will reveal only that they didn’t use the typical pin-and-bushing system of connecting treads.) The two-pound cleats weigh about 90 percent less than similarly scaled tank cleats. With the combined weight savings, the Ripsaw’s 650-horsepower V8 engine cranks out nine times as much horsepower per pound as an M1A Abrams.
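
The horsepower-per-pound claim is easy to sanity-check. The Abrams figures below are widely published; the Ripsaw’s curb weight is not given in the article, so the value used here is an assumption for illustration, and the ratio moves with it:

```python
# Back-of-envelope power-to-weight comparison. Abrams: ~1,500 hp turbine,
# ~68 short tons. The Ripsaw curb weight is an ASSUMED figure (the
# article does not state it); a lighter assumption pushes the ratio
# toward the article's "nine times".
def hp_per_lb(horsepower: float, weight_lb: float) -> float:
    return horsepower / weight_lb

abrams = hp_per_lb(1500, 68 * 2000)
ripsaw = hp_per_lb(650, 9000)            # assumed ~9,000 lb curb weight
print(f"Abrams {abrams:.4f} hp/lb, Ripsaw {ripsaw:.4f} hp/lb, "
      f"ratio {ripsaw / abrams:.1f}x")
```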

While working their day jobs — Mike as a financial adviser, Geoff as a foreman at a utilities plant — the self-taught engineers hauled the Ripsaw prototype from their workshop in Maine to the 2005 Washington Auto Show, where they showed it to army officials interested in developing weaponized unmanned ground vehicles (UGVs). That led to a demonstration for Maine Senator Susan Collins, who helped the Howes secure $1.25 million from the Department of Defense. The brothers founded Howe and Howe Technologies in 2006 and set to work upgrading various Ripsaw systems, including a differential drive train that automatically doles out the right amount of power to each track for turns. The following year they handed it over to the Army’s Armament Research Development and Engineering Center (ARDEC), which paired it with a remote-control M240 machine gun and put the entire system through months of strenuous tests. “What really set it apart from other UGVs was its speed,” says Bhavanjot Singh, the ARDEC project manager overseeing the Ripsaw’s development. Other UGVs top out at around 20 mph, but the Ripsaw can keep up with a pack of Humvees.

Over the Hill: Despite the best efforts of inventors Mike [left] and Geoff Howe, the Ripsaw has proven unbreakable. It did once break a suspension mount — and drove on for hours without trouble. (Photo: John B. Carnett)

Back on the field, the tank has been readied for the photo. The program manager for Howe and Howe Technologies, Will McMaster, who is sitting at the Ripsaw’s controls around the corner and roughly a football field away, drives it straight over a three-foot-tall concrete wall. The brothers think that when the $760,000 Ripsaw is ready for mass production this summer, feats like this will give them a lead over other companies vying for a military UGV contract. “Every other UGV is small and uses [artificial intelligence] to avoid obstacles,” Mike says. “The Ripsaw doesn’t have to avoid obstacles; it drives over them.”

Singularity Hub

Create an AI on Your Computer

Written on May 28, 2009 – 11:48 am | by Aaron Saenz |

If many hands make light work, then maybe many computers can make an artificial brain. That’s the basic reasoning behind Intelligence Realm’s Artificial Intelligence project. By reverse engineering the brain through a simulation spread out over many different personal computers, Intelligence Realm hopes to create an AI from the ground up, one neuron at a time. The first waves of simulation are already proving successful, with over 14,000 computers used and 740 billion neurons modeled. Singularity Hub managed to snag the project’s leader, Ovidiu Anghelidi, for an interview: see the full text at the end of this article.

The ultimate goal of Intelligence Realm is to create an AI or multiple AIs, and use these intelligences in scientific endeavors. By focusing on the human brain as a prototype, they can create an intelligence that solves problems and “thinks” like a human. This is akin to the work done at FACETS that Singularity Hub highlighted some weeks ago. The largest difference between Intelligence Realm and FACETS is that Intelligence Realm is relying on a purely simulated/software approach.

Which sort of makes Intelligence Realm similar to the Blue Brain Project that Singularity Hub also discussed. Both are computer simulations of neurons in the brain, but Blue Brain’s ultimate goal is to better understand neurological functions, while Intelligence Realm is seeking to eventually create an AI. In either case, to successfully simulate the brain in software alone, you need a lot of computing power. Blue Brain runs off a high-tech supercomputer, a resource that’s pretty much exclusive to that project. Even with that impressive commodity, Blue Brain is hitting the limit of what it can simulate. There’s too much to model for just one computer alone, no matter how powerful. Intelligence Realm is using a distributed computing solution. Where one computer cluster alone may fail, many working together may succeed. Which is why Intelligence Realm is looking for help.

The AI system project is actively recruiting, with more than 6700 volunteers answering the call. Each volunteer runs a small portion of the larger simulation on their computer(s) and then ships the results back to the main server. BOINC, the Berkeley-built distributed computing software that makes it all possible, manages the flow of data back and forth. It’s the same software used for SETI’s distributed computing processing. Joining the project is pretty simple: you just download BOINC and some other data files, and you’re good to go. You can run the simulation as an application, or as part of your screen saver.
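
The mechanics are a classic scatter/gather pattern: the server cuts the simulation into independent work units, volunteer machines each compute one, and the server merges the returned results. Here is a conceptual Python sketch of that flow; the function names and the toy “simulation” are illustrative, not Intelligence Realm’s or BOINC’s actual code:

```python
from concurrent.futures import ProcessPoolExecutor

def simulate_work_unit(neuron_block):
    # Stand-in for advancing one block of neurons through one time slice.
    return sum(neuron_block) % 997

def run_distributed(neuron_ids, block_size=1000):
    # "Scatter": split the population into independent work units.
    blocks = [neuron_ids[i:i + block_size]
              for i in range(0, len(neuron_ids), block_size)]
    # The process pool plays the role of thousands of volunteer machines;
    # "gather": collect each unit's result back at the server.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(simulate_work_unit, blocks))

if __name__ == "__main__":
    results = run_distributed(list(range(10_000)))
    print(len(results), "work units returned")
```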

Baby Steps

So, 6700 volunteers, 14,000 or so platforms, 740 billion neurons, but what is the simulated brain actually thinking? Not a lot at the moment. The same is true of the Blue Brain Project, or FACETS. Simulating a complex organ like the brain is a slow process, and the first steps are focused on understanding how the thing actually works. Inputs (Intelligence Realm is using text strings) are converted into neuronal signals; those signals are allowed to interact in the simulation, and the end state is converted back to an output. It’s a time- and labor-intensive (computation-intensive) process. Right now, Intelligence Realm is just building towards simple arithmetic.

Which is definitely a baby step, but there are more steps ahead. Intelligence Realm plans on learning how to map numbers to neurons, understanding the kind of patterns of neurons in your brain that represent numbers, and figuring out basic mathematical operators (addition, subtraction, etc). From these humble beginnings, more complex reasoning will emerge. At least, that’s the plan.
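
One standard way to “map numbers to neurons” is a population code, where each neuron prefers a particular value and fires most strongly near it; the number is read back as the activity-weighted average of those preferences. Whether Intelligence Realm uses this scheme is not stated, so treat the sketch below as a generic illustration:

```python
import math

# Toy population code over the digits 0..9: neuron i "prefers" value i
# and responds with a Gaussian bump around it.
def encode(value, n=10, width=1.0):
    return [math.exp(-((value - i) / width) ** 2) for i in range(n)]

def decode(activity):
    # Activity-weighted average of preferred values recovers the number.
    total = sum(activity)
    return sum(a * i for i, a in enumerate(activity)) / total

activity = encode(3.0)
print(f"decoded: {decode(activity):.2f}")   # ~3.00
```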

Intelligence Realm isn’t just building some sort of biophysical calculator. Their brain is being designed so that it can change and grow, just like a human brain. They’ve focused on simulating all parts of the brain (including the lower reasoning sections) and increasing the plasticity of their model. Right now it’s stumbling towards knowing 1+1 = 2. Even with linear growth they hope that this same stumbling intelligence will evolve into a mental giant. It’s a monumental task, though, and there’s no guarantee it will work. Building artificial intelligence is probably one of the most difficult tasks to undertake, and this early in the game, it’s hard to see if the baby steps will develop into adult strides. The simulation process may not even be the right approach. It’s a valuable experiment for what it can teach us about the brain, but it may never create an AI. A larger question may be, do we want it to?

Knock, Knock…It’s Inevitability

With the newest Terminator movie out, it’s only natural to start worrying about the dangers of artificial intelligence again. Why build these things if they’re just going to hunt down Christian Bale? For many, the threats of artificial intelligence make it seem like an effort of self-destructive curiosity. After all, from Shelley’s Frankenstein Monster to Adam and Eve, Western civilization seems to believe that creations always end up turning on their creators.

AI, however, promises rewards as well as threats. Problems in chemistry, biology, physics, economics, engineering, and astronomy, even questions of philosophy could all be helped by the application of an advanced AI. What’s more, as we seek to upgrade ourselves through cybernetics and genetic engineering, we will become more artificial. In the end, the line between artificial and natural intelligence may be blurred to a point that AIs will seem like our equals, not our eventual oppressors. However, that’s not a path that everyone will necessarily want to walk down.

Will AI and Humans learn to co-exist?

The nature of distributed computing and BOINC allows you to effectively vote on whether or not this project will succeed. Intelligence Realm will eventually need hundreds of thousands, if not millions, of computing platforms to run its simulations. If you believe that AI deserves a chance to exist, give them a hand and recruit others. If you think we’re building our own destroyers, then don’t run the program. In the end, the success or failure of this project may very well depend on how many volunteers are willing to serve as midwives to a new form of intelligence.

Before you make your decision though, make sure to read the following interview. As project leader, Ovidiu Anghelidi is one of the driving minds behind reverse engineering the brain and developing the eventual AI that Intelligence Realm hopes to build. He didn’t mean for this to be a recruiting speech, but he makes some good points:

SH: Hello. Could you please start by giving yourself and your project a brief introduction?

OA: Hi. My name is Ovidiu Anghelidi and I am working on a distributed computing project involving thousands of computers in the field of artificial intelligence. Our goal is to develop a system that can perform automated research.

What drew you to this project?

During my adolescence I tried to understand the nature of questions. I used questions extensively as a learning tool. That drove me to search for better methods of understanding. After looking at all kinds of methods, I felt that understanding creativity was a worthier pursuit. Applying various methods of learning and understanding is a fine job, but finding outstanding solutions requires much more than that. For a short while I tried to understand how creativity works and what exactly it is. I found out that there is not much work done on this subject, mainly because it is an overlapping concept. The search for creativity led me to the field of AI. Because one of the past presidents of the American Association for Artificial Intelligence dedicated an entire issue to this subject, I started pursuing that direction. I looked into the field of artificial intelligence for a couple of years, and at some point I was reading more and more papers that touched on the subjects of cognition and the brain, so I looked briefly into neuroscience. After I read an introductory book on neuroscience, I realized that understanding brain mechanisms is what I should have been doing all along, for the past 20 years. To this day I am pursuing this direction.

What’s your timetable for success? How long till we have a distributed AI running around using your system?

I have been working on this project for about 3 years now, and I estimate that we will need another 7–8 years to finalize it. Nonetheless, we will not need that much time to be able to use some of its features. I expect to have some basic working features within a couple of months. Take, for example, the multiple-simulations feature. If we want to pursue various directions in different fields (i.e., mathematics, biology, physics), we will need to set up a simulation for each field. But we do not need to reach the end of the project to be able to run single simulations.

Do you think that Artificial Intelligence is a necessary step in the evolution of intelligence? If not, why pursue it? If so, does it have to happen at a given time?

I wouldn’t say necessary, because we don’t know what we are evolving towards. As long as we do not have the full picture from beginning to end, or cases from other species to compare our history to, we shouldn’t just assume that it is necessary.

We should pursue it with all our strength and understanding, because soon enough it can give us a lot of answers about ourselves and this Universe. By soon I mean two or three decades; a very short time span, indeed. Artificial intelligence will amplify our research efforts across all disciplines by a couple of orders of magnitude.

In our case it is a natural extension. Any species that reaches a certain level of intelligence would, at some point in time, start replicating and extending its natural capacities in order to control its environment. The human race has done that for the last couple of thousand years; we have tried to replicate and extend our capacity to run, see, smell, and touch. Now we have reached thinking. We invented vehicles, television sets, and other devices, and we are now close to having artificial intelligence.

What do you think are important short term and long term consequences of this project?

We hope that in the short term we will create some awareness of the benefits of artificial intelligence technology. The longer term is hard to foresee.

How do you see Intelligence Realm interacting with more traditional research institutions? (Universities, peer reviewed Journals, etc)

Well… we will not be able to provide full details about the entire project, because we are pursuing a business model so that we can support the project in the future, so there is little chance of a collaboration with a university or other research institution. Down the road, as we reach a more advanced stage of development, we will probably forge some collaborations. For the time being this doesn’t appear feasible. I am open to collaborations, but I can’t see how that would happen.

I submitted some papers to a couple of journals in the past, but I usually received suggestions that I should look at other journals, from other fields. Most of the work in artificial intelligence doesn’t have neuroscience elements, and the work in neuroscience contains little or no artificial intelligence. Anyway, I need no recognition.

Why should someone join your project? Why is this work important?

If someone is interested in artificial intelligence, it might help them form a different view of the subject and see what components are being developed over time. I cannot tell how important this is for someone else. On a personal level, I can say that I am working on this because my work is important to me, and because having an AI system will let me get answers to many questions. Artificial intelligence will provide exceptional benefits to the entire society.

What should someone do who is interested in joining the simulation? What can someone do if they can’t participate directly? (Is there a “write-your-congressman” sort of task they could help you with?)

If someone is interested in joining the project, they need to download the BOINC client from http://boinc.berkeley.edu and then attach to the project using its master URL, http://www.intelligencerealm.com/aisystem. We appreciate the support received from thousands of volunteers all over the world.

If someone can’t participate directly, I suggest that he or she keep an open mind about what AI is and how it can benefit them, and also try to understand its pitfalls.

There is no write-your-congressman type of task. Mass education is key to AI’s success. This project doesn’t need to be in the spotlight.

What is the latest news?

We reached 14,000 computers and we simulated over 740 billion neurons. We are working on implementing a basic hippocampal model for learning and memory.

Anything else you want to tell us?

If someone considers the development of artificial intelligence impossible, or too far in the future to care about, I can only tell him or her: “Embrace the inevitable.” Advances in the field of neuroscience are coming rapidly. Scientists are thorough.

Understanding its benefits and pitfalls is all that is needed.

Thank you for your time and we look forward to covering Intelligence Realm as it develops further.

Thank you for having me.

New Scientist

30 April 2009 by Michael Brooks

Yes, if we play our cards right — or wrong, depending on your perspective.

In engineering terms, it is easy to see qualitative similarities between the human brain and the internet’s complex network of nodes, as they both hold, process, recall and transmit information. “The internet behaves a fair bit like a mind,” says Ben Goertzel, chair of the Artificial General Intelligence Research Institute, an organisation inevitably based in cyberspace. “It might already have a degree of consciousness.”

Not that it will necessarily have the same kind of consciousness as humans: it is unlikely to be wondering who it is, for instance. To Francis Heylighen, who studies consciousness and artificial intelligence at the Free University of Brussels (VUB) in Belgium, consciousness is merely a system of mechanisms for making information processing more efficient by adding a level of control over which of the brain’s processes get the most resources. “Adding consciousness is more a matter of fine-tuning and increasing control… than a jump to a wholly different level,” Heylighen says.

How might this manifest itself? Heylighen speculates that it might turn the internet into a self-aware network that constantly strives to become better at what it does, reorganising itself and filling gaps in its own knowledge and abilities.

If it is not already semiconscious, we could do various things to help wake it up, such as requiring the net to monitor its own knowledge gaps and do something about them. It shouldn’t be something to fear, says Goertzel: “The outlook for humanity is probably better in the case that an emergent, coherent and purposeful internet mind develops.”

Heylighen agrees, but warns that we might find it a little disappointing. “We probably would not notice a whole lot of a difference, initially,” he says.

And when might this begin? According to Heylighen, it all depends on internet fashion trends. If the effort that has gone into developing social networking sites goes into developing internet consciousness, it could happen within a decade, he says.

March 12, 2009 10:00 AM PDT

Q&A: The robot wars have arrived

[Image: P.W. Singer]

Just as the computer and ARPAnet evolved into the PC and Internet, robots are poised to integrate into everyday life in ways we can’t even imagine, thanks in large part to research funded by the U.S. military.

Many people are excited about the military’s newfound interest and funding of robotics, but few are considering its ramifications on war in general.

P.W. Singer, senior fellow and director of the 21st Century Defense Initiative at the Brookings Institution, went behind the scenes of the robotics world to write “Wired for War: The Robotics Revolution and Conflict in the 21st Century.”

Singer took time from his book tour to talk with CNET about the start of a revolution tech insiders predicted, but so many others missed.

Q: Your book is purposely not the typical think tank book. It’s filled with just as many humorous anecdotes about people’s personal lives and pop culture as it is with statistics, technology, and history. You say you did this because robotic development has been greatly influenced by the human imagination?
Singer: Look, to write on robots in my field is a risky thing. Robots were seen as this thing of science fiction even though they’re not. So I decided to double down, you know? If I was going to risk it in one way, why not in another way? It’s my own insurgency on the boring, staid way people talk about this incredibly important thing, which is war. Most of the books on war and its dynamics, to be blunt, are, oddly enough, boring. And it means the public doesn’t actually have an understanding of the dynamics as they should.

It seems like we’re just at the beginning here. You quote Bill Gates comparing robots now to what computers were in the eighties.
Singer: Yes, the military is a primary buyer right now and it’s using them (robots) for a limited set of applications. And yes, in each area where we prove they can be utilized, you’ll see a massive expansion. That’s all correct, but then I think it’s even beyond what he was saying. No one sitting back with a computer in 1980 said, “Oh, yes, these things are going to have a ripple effect on our society and politics such that there’s going to be a political debate about privacy in an online world, and mothers in Peoria are going to be concerned about child predators on this thing called Facebook.” It’ll be the same way with the impact on war and robotics; a ripple effect in areas we’re not even aware of yet.

Right now, rudimentary as they are, we have autonomous and remote-controlled robots while most of the people we’re fighting don’t. What’s that doing to our image?
Singer: The leading newspaper editor in Lebanon described (and he’s actually describing this as there is a drone above him at the time) how these things show you’re afraid, you’re not man enough to fight us face-to-face, it shows your cowardice, and all we have to do to defeat you is just kill a few of your soldiers.

It’s playing like cowardice?
Singer: Yeah, it’s like every revolution. You know, when gunpowder is first used people think that’s cowardly. Then they figure it out and it has all sorts of other ripple effects.

What’s war going to look like once robot warriors become autonomous and ubiquitous for both sides?
Singer: I think if we’re looking at the realm of science fiction, less so “Star Wars: The Clone Wars” and more so the world of “Blade Runner” where it’s this mix between incredible technologies, but also the dirt and grime of poverty in the city. I guess this shows where I come down on these issues. The future of war is more and more machines, but it’s still also insurgencies, terrorism, you name it.

What seems most likely in this scenario (at least in the near term) is a continuation of teams of robots and humans working together, each doing what they’re good at… Maybe the human as the quarterback and the robots as the players, with the humans calling out plays, making decisions, and the robots carrying them out. However, just like on a football field, things change. The wide receivers can alter the play, and that seems to be where we’re headed.

How will robot warfare change our international laws of war? If an autonomous robot mistakenly takes out 20 little girls playing soccer in the street and people are outraged, is the programmer going to get the blame? The manufacturer? The commander who sent in the robot fleet?
Singer: That’s the essence of the problem of trying to apply a set of laws that are so old they qualify for Medicare to these kind of 21st-century dilemmas that come with this 21st-century technology. It’s also the kind of question that you might have once only asked at Comic-Con and now it’s a very real live question at the Pentagon.

I went around trying to get the answer to this sort of question meeting with people not only in the military but also in the International Committee of the Red Cross and Human Rights Watch. We’re at a loss as to how to answer that question right now. The robotics companies are only thinking in terms of product liability…and international law is simply overwhelmed or basically ignorant of this technology. There’s a great scene in the book where two senior leaders within Human Rights Watch get in an argument in front of me of which laws might be most useful in such a situation.

Is this where they bring up Star Trek?
Singer: Yeah, one’s bringing up the Geneva Conventions and the other one’s pointing to the Star Trek Prime Directive.

You say in your book that except for a few refuseniks, most scientists are definitely not subscribing to Isaac Asimov’s laws. What then, generally, are the ethics of these roboticists?
Singer: The people who are building these systems are excited by the possibilities of the technology. But the field of robotics, it’s a very young field. It’s not like medicine that has an ethical code. It’s not done what the field of genetics has, where it’s begun to wrestle with the ethics of what they’re working on and the ripple effects it has on the society. That’s not happening in the robotics field, except in isolated instances.

What military robotic tech is likely to migrate over to local law enforcement or the consumer world?
Singer: I think we’re already starting to see some of the early stages of that…I think this is the other part that Gates was saying: we get to the point where we stop calling them computers. You know, I have a computer in my pocket right now. It’s a cell phone. I just don’t call it a computer. The new Lexus parallel-parks itself. Do we call it a robot car? No, but it’s kind of doing something robotic.

You know, I’m the guy coming out of the world of political science, so it opens up these fun debates. Take the question of ethics and robots. How about me? Is it my Second Amendment right to have a gun-armed robot? I mean, I’m not hiring my own gun robots, but Homeland Security is already flying drones, and police departments are already purchasing them.

Explain how robotic warfare is “open source” warfare.
Singer: It’s much like what’s happened in the software industry going open source, the idea that this technology is not something that requires a massive industrial structure to build. Much like open source software, not only can almost anyone access it, but anyone with an entrepreneurial spirit, and in this case a very wicked entrepreneurial spirit, can improve upon it. All sorts of actors, not just high-end militaries, can access high-end military technologies…Hezbollah is not a state. However, Hezbollah flew four drones at Israel. Take this down to the individual level and I think one of the darkest quotes comes from the DARPA scientist who said, and I quote, “For $50,000 I could shut down Manhattan.” The potential of an al-Qaeda 2.0 is made far more lethal with these technologies, but also the next generation of a Timothy McVeigh or Unabomber is multiplying their capability with these technologies.

The U.S. military said in a statement this week that it plans to pull 12,000 troops out of Iraq by the fall. Do you think robots will have a hand in helping to get to that number?
Singer: Most definitely.

How?
Singer: The utilization of the Predator operations is allowing us to accomplish certain goals there without troops on the ground.

Is this going to lead to more of what you call the cubicle warriors or the armchair warriors? They’re in the U.S. operating on this end, and then going to their kid’s PTA meeting at the end of the day?
Singer: Oh, most definitely. Look, the Air Force this year is putting out more unmanned pilots than manned pilots.

Explain how soldiers now come ready-trained because of our video games.
Singer: The military is very smartly free-riding off of the video game industry, off the designs in terms of the human interface, using the Xbox controllers, PlayStation controllers. The Microsofts and Sonys of the world have spent millions designing the system that fits perfectly in your hand. Why not use it? They’re also free-riding off this entire generation that’s come in already trained in the use of these systems.

There’s another aspect though, which is the mentality people bring to bear when using these systems. It really struck me when one of the people involved in Predator operations described what it was like to take out an enemy from afar, what it was like to kill. He said, “It’s like a video game.” That’s a very odd reference, but also a telling reference for this experience of killing and how it’s changing in our generation.

It’s making them more removed from the morality of it?
Singer: It’s the fundamental difference between the bomber pilots of WWII and even the bomber pilots of today. It’s disconnection from risk on both a physical and psychological plane.

When my grandfather went to war in the Pacific, he went to a place where there was such danger he might not ever come home again. You compare that to the drone pilot experience. Not only what it’s like to kill, but the whole experience of going to war is getting up, getting into their Toyota Corolla, going in to work, killing enemy combatants from afar, getting in their car, and driving home. So 20 minutes after being at war, they’re back at home and talking to their kid about their homework at the dinner table. So this whole meaning of the term “going to war” that’s held true for 5,000 years is changing.

What do you think is the most dangerous military robot out there now?
Singer: It all hinges on the definition of the term dangerous. The system that’s been most incredibly lethal in terms of consequences on the battlefield so far if you ask military commanders is the Predator. They describe it as the most useful system, manned or unmanned, in our operations in Afghanistan and Iraq. Eleven out of the twenty al-Qaeda leaders we’ve gotten, we’ve gotten via a drone strike. Now, dangerous can have other meanings. The work on evolutionary software scares the shit out of me.

You’re saying we’re gonna get to a HAL situation?
Singer: Maybe it’s just cause I’ve grown up on a diet of all that sci-fi, but the evolutionary software stuff does spook me out a little bit. Oh, and robots that can replicate themselves. We’re not there yet, but that’s another like “whoa!”

People have finally got the attention of companies and governments to look ahead to 2020, 2040, 2050 in terms of the environment and green technology. But as you said in your book, that’s not happening with robotics issues. Why do you think that is?
Singer: When it comes to the issue of war, we’re exceptionally uncomfortable looking forward, mainly because so many people have gotten it so wrong. People in policymaker positions, policy adviser positions, and the people making the decisions are woefully ignorant in what’s happening in technology not only five years from now, not only now, but where we were five years ago. You have people describing robotics as “mere science fiction” when we’re talking about having already 12,000 (robots) on the ground, 7,000 in the air. During this book tour, I was in this meeting with a very senior Pentagon adviser, top of the field, very big name. He said, “Yeah this technology stuff is so amazing. I bet one day we’ll have this technology where like one day the Internet will be able to look like a video game, and it will be three-dimensional, I’ll bet.”

(laughing) And meanwhile, your wife’s at Linden Labs.
Singer: (laughing) Yeah, it’s Second Life. And that’s not anything new.

At least five years old, yeah.
Singer: And you don’t have to be a technology person to be aware of it. I mean, it’s been covered by CNN. It appeared on “The Office” and “CSI.” You just have to be aware of pop culture to know. And so it was this thing that he was describing as it might happen one day, and it happened five years ago. Then the people that do work on the technology and are aware of it, they tend to either be: head-in-the-sand in terms of “I’m just working on my thing, I don’t care about the effects of it”; or “I’m optimistic. Oh these systems are great. They’re only gonna work out for the best.” They forget that this is a real world. They’re kind of like the atomic scientists.

Obviously the hope is that robots will do all the dirty work of warfare. But warfare is inherently messy, unpredictable, and often worse than expectations. How would a roboticized war be any different in that respect?
Singer: In no way. That’s the fundamental argument of the book. While we may have Moore’s Law in place, we still haven’t gotten rid of Murphy’s Law. So we have a technology that is giving us incredible capabilities that we couldn’t even have imagined a few years ago, let alone had in place. But the fog of war is not being lifted as Rumsfeld once claimed absurdly.

You may be getting new technological capabilities, but you are also creating new human dilemmas. And it’s those dilemmas that are really the revolutionary aspect of this. What are the laws that surround this and how do you insure accountability in this setting? At what point do we have to become concerned about our weapons becoming a threat to ourselves? This future of war is again a mix of more and more machines being used to fight, but the wars themselves are still about our human realities. They’re still driven by our human failings, and the ripple effects are still because of our human politics, our human laws. And it’s the cross between the two that we have to understand.

Candace Lombardi is a journalist who divides her time between the U.S. and the U.K. Whether it’s cars, robots, personal gadgets, or industrial machines, she enjoys examining the moving parts that keep our world rotating. Email her at [email protected]. She is a member of the CNET Blog Network and is not a current employee of CNET.

Jetfuel powerpack, armour… shoulder turret?


US weaponry globocorp Lockheed is pleased to announce the unveiling of its newly-acquired powered exoskeleton intended to confer superhuman strength and endurance upon US soldiers.

Needless to say, a corporate promo vid of the Human Universal Load Carrier (HULC™) is available.

The exoskeleton is based on a design from Berkeley Bionics of California, but Lockheed say they have brought significant pimpage to the basic HULC. The enhanced version is now on show at the Association of the United States Army Winter Symposium in Florida.

“With our enhancements to the HULC system, Soldiers will be able to carry loads up to 200 pounds with minimal effort,” according to Lockheed’s Rich Russell.

From the vid, the HULC certainly seems a step forward on Raytheon’s rival XOS mechwarrior suit, which at last report still trails an inconvenient power cable to the nearest wall socket.

Not so the HULC; four pounds of lithium polymer batteries will run the exoskeleton for an hour walking at 3mph, according to Lockheed. Speed marching at up to 7mph reduces this somewhat; a battery-draining “burst” at 10mph is the maximum speed.

The user can hump 200lb with relative ease while marching in a HULC, well in excess of even the heaviest combat loads normally carried by modern infantry. There’d be scope to carry a few spare batteries. Even if the machine runs out of juice, Lockheed claims that its reinforcement and shock absorption still help with load carrying rather than hindering.

There are various optional extras, too. The HULC can be fitted with armour plating, heating or cooling systems, sensors and “other custom attachments”. We particularly liked that last one: our personal request would be a powered gun or missile mount of some kind above the shoulder, linked to a helmet or monocle laser sight.

One does note that remote-controlled gun mounts weighing as little as 55lb are available, able to handle various kinds of normally tripod- or bipod-mounted heavy weapons.

You’d need more power, but that’s on offer. According to the Lockheed spec sheet (pdf) there’s an extended-endurance HULC fitted with a “silent” generator running on JP8 jet fuel. A tankful will run this suit for three days, marching eight hours per day — though presumably at the cost of some payload.

Doubtless other power options could be developed: Lockheed says the HULC needs 250 watts on average.
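
Lockheed’s numbers are roughly self-consistent, which is worth checking: 250 W for an hour is 250 Wh, drawn from about four pounds of cells. The energy-density conversion below is an inference from the article’s figures, not a number Lockheed quotes:

```python
# Implied energy density of the HULC pack from the article's figures:
# 4 lb of lithium-polymer cells, ~1 hour of 3 mph walking at 250 W average.
battery_mass_kg = 4 * 0.4536             # pounds to kilograms
energy_wh = 250 * 1.0                    # watts x hours
print(f"implied density: {energy_wh / battery_mass_kg:.0f} Wh/kg")
# ~138 Wh/kg, within the 100-200 Wh/kg typically quoted for Li-poly
# cells, so the one-hour runtime claim hangs together.
```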

It’s important to note that the HULC is basically a legs and body system only: there’s no enhancement to the user’s arms, though an over-shoulder frame can be fitted allowing a wearer to hoist heavy objects such as artillery shells with the aid of a lifting strop.

The HULC may not be quite ready for prime time yet. But the military exoskeleton as a concept does seem to be getting to the stage of usefulness, at least in niche situations for specific jobs.

The BigDog petrol packmule, an alternative strategy for helping footsoldiers carry their increasingly heavy loads, may now have a serious rival. ®