
In spite of the popular perception of the state of artificial intelligence, technology has yet to create a robot with the same instincts and adaptability as a human. While humans are born with natural instincts that have evolved over millions of years, neuroscientist and artificial intelligence expert Dr. Danko Nikolic believes these same tendencies can be instilled in a robot.

“Our biological children are born with a set of knowledge. They know where to learn, they know where to pay attention. Robots simply cannot do that,” Nikolic said. “The problem is you cannot program it. There’s a trick we can use called AI Kindergarten. Then we can basically interact with this robot kind of like we do with children in kindergarten, but then make robots learn one level lower, at the level of something called the machine genome.”

Programming that machine genome would require all of the innate human knowledge that has evolved over millions of years, Nikolic said. Lacking that ability, researchers are starting from scratch, he said. While this form of artificial intelligence is still in its embryonic state, it does have some evolutionary advantages that humans didn’t have.

“By using AI Kindergarten, we don’t have to repeat the evolution exactly the way evolution has done it,” Nikolic said. “This experiment has been done already and the knowledge is already stored in our genes, so we can accelerate tremendously. We can skip millions of experiments where evolution has already failed.”

Rather than jumping into logic or facial recognition, researchers must still begin with simple things, like basic reflexes, and build on top of those, Nikolic said. From there, we can only hope to come close to the intelligence of an insect or a small bird.

“I think we can develop robots that would be very much biological, like robots, and they would behave as some kind of lower level intelligence animal, like a cockroach or lesser intelligent birds,” he said. “(The robots) would behave the way (animals) do and they would solve problems the way they do. It would have the flexibility and adaptability that they have and that’s much, much more than what we have today.”

As that machine genome continues to evolve, Nikolic compared the potential manipulation of that genome to the selective breeding that ultimately turned ferocious wolves into friendly dogs. The results of robotic evolution will be equally benign, he believes, and any attempts to develop so-called “killer robots” won’t happen overnight. Just as it takes roughly 20 years for a child to fully develop into an adult, Nikolic sees an equally long process for artificial intelligence to evolve.

Nikolic cited similar attempts in the past where manipulating the genome of biological systems produced very benign results. Further, he doesn’t foresee researchers creating something dangerous: given his theory that AI would develop from a core genome, it would be next to impossible to change the genome of a machine or of a biological system by changing just a few parts.

Going forward, Nikolic still sees a need for caution. Building some form of malevolent artificial intelligence is possible, he said, but the degree of difficulty still makes it unlikely.

“We cannot change the genome of a machine or a human simply by changing a few parts and then having the thing work as we want. Making it mean is much more difficult than developing a nuclear weapon,” Nikolic said. “I think we have things to watch out for, and there should be regulation, but I don’t think this is a place for some major fear… there is no big risk. What we will end up with, I believe, will be a very friendly AI that will care for humans and serve humans and that’s all we will ever use.”

It may hurt your brain to think about it, but it appears the answer may well be yes, or at least the numbers are in the same ballpark.

Astrophysicists set out to answer this question about a decade ago. It’s a complicated problem to solve, but it becomes somewhat more tractable with a couple of qualifiers: we are talking about stars in the observable universe, and grains of sand on the whole planet, not just the seashores.

The researchers started by calculating the luminosity density of a section of the cosmos, a measure of how much light that region of space emits. They then used this figure to estimate the number of stars needed to produce that amount of light. This was quite a mathematical challenge.

“You have to assume that you can have one type of star represent all types of stars,” says astrophysicist Simon Driver, a professor at the International Centre for Radio Astronomy Research in Western Australia and one of the researchers who worked on the question.

“Then let’s assume, on average, this is a normal-mass star that gives out a normal amount of light. So if I know that a part of the universe is producing this amount of light, I can now say how many stars that would correspond to.”

Now armed with an estimate of the number of stars within a section of the cosmos, the next challenge was to work out the size of the cosmos. Given that the universe is 13.8 billion years old, we might suppose that we sit inside a sphere 13.8 billion light-years in radius. But there’s a catch: the universe may well be infinite in size.
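Driver’s approach amounts to a back-of-envelope estimate, and the comparison can be sketched numerically. The figures below are assumptions for illustration only: they are widely quoted order-of-magnitude estimates (roughly 7 × 10²² stars in the observable universe, and on the order of 10¹⁸–10¹⁹ sand grains on Earth), not values taken from this article.

```python
# Back-of-envelope comparison: stars in the observable universe
# vs. grains of sand on Earth. All inputs are rough, widely
# quoted order-of-magnitude estimates, used here for illustration.

# Step 1 (the method described above): if a region of space emits
# total light L_total and a "typical" star emits L_star, the region
# holds roughly L_total / L_star stars. Scaling that star density to
# the whole observable universe gives an often-cited estimate:
stars_in_observable_universe = 7e22  # assumed figure, ~70 sextillion

# Step 2: a frequently quoted rough estimate of the number of sand
# grains on Earth's beaches and deserts:
sand_grains_on_earth = 7.5e18  # assumed figure

# Compare the two orders of magnitude.
ratio = stars_in_observable_universe / sand_grains_on_earth
print(f"Stars per grain of sand: ~{ratio:.0e}")
```

Under these assumptions, stars outnumber sand grains by a factor of roughly ten thousand: “in the same ballpark” only in the loose sense that both are unimaginably large numbers.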

Read more

Ugh, this is just typical. You think you know the way the world works: wind blows, fire burns, wheels spin and – wait, what’s this thing doing?

What? You mean, it can actually move in any direction without so much as turning on an axis? That’s blowing my mind. I’m no gear head, but I’m sort of attached to having a steering wheel in my car, you know? Now you’re saying that self-driving cars will take those away, and there won’t even be wheels to turn in the direction you want to go?

Read more

A computer simulation of a cognitive model made up entirely of artificial neurons learns to communicate through dialogue, starting from a state of tabula rasa.

A group of researchers from the University of Sassari (Italy) and the University of Plymouth (UK) has developed a cognitive model, made up of two million interconnected artificial neurons, able to learn to communicate using human language starting from a state of ‘tabula rasa’, only through communication with a human interlocutor. The model is called ANNABELL (Artificial Neural Network with Adaptive Behavior Exploited for Language Learning) and it is described in an article published in PLOS ONE. This research sheds light on the neural processes that underlie the development of language.

How does our brain develop the ability to perform complex cognitive functions, such as those needed for language and reasoning? This is a question we are surely all asking ourselves, and one to which researchers are not yet able to give a complete answer. We know that the human brain contains about one hundred billion neurons that communicate by means of electrical signals, and we have learned a great deal about how those signals are produced and transmitted between neurons. There are also experimental techniques, such as functional magnetic resonance imaging, that allow us to see which parts of the brain are most active when we are engaged in different cognitive activities. But detailed knowledge of how a single neuron works, and of what the various parts of the brain do, is not enough to answer the initial question.

Read more

An artificial intelligence program received such high scores on a standardized test that it’d have an 80% chance of getting into a Japanese university.

The Wall Street Journal reports that the program, developed by Japan’s National Institute of Informatics, took a multi-subject college entrance exam and passed with an above-average score of 511 points out of a possible 950. (The national average is 416.) With scores like that, it has an 8 out of 10 chance of being admitted to 441 private institutions in Japan, and 33 national ones.

The AI took some time to perfect, and it still has a ways to go. The team had been working on the program since 2011, the same year IBM’s Watson dominated Jeopardy! champions Ken Jennings and Brad Rutter in a multi-day tournament. The Japanese program had previously received below-average results, but this time around it did particularly well on math and history questions, which have straightforward answers, while still receiving iffy marks on the physics section of the test, which requires advanced language-processing skills.

Read more


The subtitle to this post is a variation of William Gibson’s famous remark: “The future is already here — it’s just not very evenly distributed.” An obvious follow-up question is: if the future is already here, where can I find it?

Read more

Welcome to #24 Avatar Technology Digest! We provide you with the latest news on Technology, Medical Cybernetics and Artificial Intelligence the best way we can. Here are the top stories of the last week!

1) Did you know that Disney does more than shoot box office hits and sell toys to your kids? They also have a very active Research Department that specializes in a variety of applications that can be used throughout the Disney empire. And now another interesting innovation has come out of the Research Department: a method for generating 3D-printable robots without the time- and energy-consuming manual work.

2) Being able to identify problems with a person’s body without subjecting them to invasive procedures is the fantasy of all Star Trek doctors. There’s even a prize offering a fortune to anyone who can effectively recreate the tricorder technology out in the real world. Now, Stanford scientists think that they’ve developed a system that, in time, could be used to spot cancerous tumors from a foot away.

3) Technology is all around us, but what happened to the robots we dreamed of as kids? The ones who could be our friends and members of our family. The robots who were as smart as our smartphones, but could walk and talk and learn and engage with us in a way no smartphone ever could. We think the human-like household robot Alpha 2 by Ubtech Robotics could finally be that robot, and with your support we can make Alpha 2 a reality.

4) Imagine playing a virtual-reality boxing game, complete with a menacing opponent aiming a haymaker at your head. You get your gloves up in time to block the punch, but you feel no impact when it lands, breaking the otherwise immersive experience. Researchers in Germany have developed virtual-reality technology for an armband that lets you feel the impact of virtual interactions.

TV Anchor: Olesya Yermakova @olesyayermakova
Video: Vladimir Shlykov www.GetYourMedia.ru
Hair&Make-up: Nataliya Starovoytova

Read more