A computer programme modelled on the human brain learnt to navigate a virtual maze and take shortcuts, outperforming a flesh-and-blood expert, its developers said Wednesday.

While artificial intelligence (AI) programmes have recently made great strides in imitating human brain processing—everything from recognising objects to playing complicated board games—spatial navigation has remained a challenge.

It requires the recalculation of one’s position, after each step taken, in relation to the starting point and destination—even when travelling a never-before-taken route.

Read more

Many people in tech point out that artificial narrow intelligence, or A.N.I., has grown ever safer and more reliable—certainly safer and more reliable than we are. (Self-driving cars and trucks might save hundreds of thousands of lives every year.) For them, the question is whether the risks of creating an omnicompetent Jeeves would exceed the combined risks of the myriad nightmares—pandemics, asteroid strikes, global nuclear war, etc.—that an A.G.I. could sweep aside for us.


Thinking about artificial intelligence can help clarify what makes us human—for better and for worse.

Read more

With Google’s I/O developer conference kicking off later today, Google is setting the scene for what it expects to be one of the big themes of the event: artificial intelligence. Today, the company rebranded the whole of its Google Research division as Google AI, with the old Google Research site now directing to a newly expanded Google AI site.

Google has over the years worked on a wide variety of other computing pursuits beyond AI, and all of that content will continue to exist within that new site, the company said. But the move signals how Google has increasingly focused a lot of its R&D on breaking new ground across the many facets of AI specifically, from technologies like computer vision, natural language processing, and neural networks, through to applications across virtually any and every business that Google currently and potentially touches, such as video, search and mobile apps, but also healthcare, automotive applications and other verticals.

That’s not just Google reflecting how the wider world of tech is evolving; it’s also a measure of how much Google has influenced it.

Read more

Today we take an amusing look at the jarring way life extension is often portrayed in science fiction.

Those of us who fancy science fiction stories are used to all sorts of technological miracles taking place in them. Some are plausible and might become reality at some point in the future, while others are mere fantasies—artistic liberties taken to tell a better story—that will likely never translate into real-life technologies, or, if they do, will do so at the cost of rethinking fundamental principles that we’ve thus far considered fully established.

In science fiction, we’ve seen faster-than-light travel, teleportation, portals, energy weapons, strong AI, telepathic powers, and radiation-induced superpowers of all kinds; unfortunately, the only “superpower” known to be actually induced by radiation thus far is cancer. Entire imaginary worlds have revolved around the existence of one or more of these marvels, and series and shows have assumed that they’re possible and imagined what our society would be like with them, but one particular possibility has been neglected or relegated to one or two episodes and then forgotten, as if it was of no importance whatsoever: the defeat of aging.

Read more

So much talk about AI and robots taking our jobs. Well, guess what, it’s already happening, and the rate of change will only increase. I estimate that about 5% of jobs have been automated — both blue-collar manufacturing jobs and, this time, low-level white-collar jobs — think back office, paralegals, etc. There’s a thing called RPA, or Robotic Process Automation, which is hollowing out back-office jobs at an alarming rate, using rules-based algorithms and expert systems. This will rapidly change with the introduction of deep learning algorithms into these “robot automation” systems, making them intelligent, capable of making intuitive decisions and therefore replacing more highly skilled and creative jobs. So if we’re on an exponential curve, and we’ve managed to automate around 5% of jobs in the past six years, say, and the doubling is every two years, that means by 2030 almost all jobs will be automated. Remember, the exponential math means 5%, 10%, 20%, 40%, 80%, then 100%, with the doubling every two years.
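The doubling arithmetic above can be checked with a short sketch. Everything here is an assumption taken from the paragraph itself (roughly 5% of jobs automated as of the time of writing, taken as 2018, with the automated share doubling every two years and capped at 100%); the function name and parameters are illustrative, not from any source.

```python
def automated_share(year, base_year=2018, base_share=0.05, doubling_years=2):
    """Projected fraction of jobs automated by `year`, under the
    author's assumptions: `base_share` automated in `base_year`,
    doubling every `doubling_years`, capped at 100%."""
    doublings = (year - base_year) / doubling_years
    return min(1.0, base_share * 2 ** doublings)

# Walk the projection forward two years at a time.
for year in range(2018, 2032, 2):
    print(year, f"{automated_share(year):.0%}")
```

Run as written, the share climbs 5% → 10% → 20% → 40% → 80% and saturates at 100% before 2030, which is the trajectory the paragraph describes. Whether the real adoption curve is exponential at all is, of course, the contested premise.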

We are definitely going to need a basic income to prevent people (doctors, lawyers, drivers, teachers, scientists, manufacturers, craftsmen) from going homeless once their jobs are automated away. This will need to be worked out at the government level — the sooner the better, because exponentials have a habit of creeping up on people and then surprising society with the intensity and rapidity of the disruptive change they bring. I’m confident that humanity can and will rise to the challenges ahead, and it is well to remember that economics is driven by technology, not the other way around. Education, as usual, is definitely the key to meeting these challenges head on and in a fully informed way. My only concern is when governments will actually start taking this situation seriously enough to start taking bold action. There certainly is no time like the present.

Read more

Take a listen to the recordings. That’s an AI doing that.


A long-standing goal of human-computer interaction has been to enable people to have a natural conversation with computers, as they would with each other. In recent years, we have witnessed a revolution in the ability of computers to understand and to generate natural speech, especially with the application of deep neural networks (e.g., Google voice search, WaveNet). Still, even with today’s state-of-the-art systems, it is often frustrating to talk to stilted computerized voices that don’t understand natural language. In particular, automated phone systems still struggle to recognize simple words and commands. They don’t engage in a conversational flow, and they force the caller to adjust to the system instead of the system adjusting to the caller.

Today we announce Google Duplex, a new technology for conducting natural conversations to carry out “real world” tasks over the phone. The technology is directed towards completing specific tasks, such as scheduling certain types of appointments. For such tasks, the system makes the conversational experience as natural as possible, allowing people to speak normally, like they would to another person, without having to adapt to a machine.

One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations.

Read more

Summary: A new study sheds light on how the cerebellum is able to make predictions and learn from mistakes, especially when it comes to completing complex motor actions. The findings could help in the development of new machine learning technologies.

Source: Johns Hopkins Medicine.

In studies with monkeys, Johns Hopkins researchers report that they have uncovered significant new details about how the cerebellum — the “learning machine” of the mammalian brain — makes predictions and learns from its mistakes, helping us execute complex motor actions such as accurately shooting a basketball into a net or focusing our eyes on an object across the room.

Read more