Google has smartened up several of its products with a type of artificial intelligence called deep learning, which involves training neural networks on lots of data and then having them make predictions about new data. Google Maps, Google Photos, and Gmail, for example, have been enhanced with this type of technology. The next service that could see gains is Google Translate.

Well, let me back up. Part of Google Translate actually already uses deep learning. That would be the instant visual translations you can get on a mobile device when you hold up your smartphone camera to the words you want to translate. But if you use Google Translate to just translate text, you know that the service isn’t always 100 percent accurate.

In an interview at the Structure Data conference in San Francisco today, Jeff Dean, a Google senior fellow who worked on some of Google's core search and advertising technology and is now the head of the Google Brain team that works on deep learning, said that his team has been working with Google's translation team to scale out experiments with translation based on deep learning. Specifically, the work is based on the technology described in a 2014 paper entitled "Sequence to Sequence Learning with Neural Networks."
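For readers who want a concrete sense of what that paper proposes, here is a minimal sketch of the encoder-decoder idea, not Google's actual system: the vocabulary sizes, dimensions, and toy batch below are invented purely for illustration. One recurrent network reads the source sentence into a vector, and a second one generates the translation from it.

```python
# Illustrative seq2seq sketch (hypothetical sizes and data, not Google's model).
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1200, 64, 128

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HID, batch_first=True)
        self.decoder = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, src_ids, tgt_ids):
        # Encode the source sentence into a final hidden state ("thought vector").
        _, state = self.encoder(self.src_emb(src_ids))
        # Decode the target sentence conditioned on that state (teacher forcing).
        dec_out, _ = self.decoder(self.tgt_emb(tgt_ids), state)
        return self.out(dec_out)  # per-position scores over target-language words

model = Seq2Seq()
src = torch.randint(0, SRC_VOCAB, (2, 7))  # toy batch: 2 source sentences of length 7
tgt = torch.randint(0, TGT_VOCAB, (2, 9))  # corresponding target sentences of length 9
logits = model(src, tgt)
print(logits.shape)  # torch.Size([2, 9, 1200])
```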

Read more

Google, AI, and Quantum: Google believes deep learning is not suitable for quantum computing. I am not sure I agree with this position, because deep learning in principle is "a series of complex algorithms that attempt to model high-level abstractions in data by using multiple processing layers with complex structures," and the beauty of quantum computing is its performance in processing vast sets of information and complex algorithms. Perhaps they meant that, at this point, they have not resolved that piece for AI.


Artificial intelligence is one of the hottest subjects these days, and recent advances in technology bring AI even closer to reality than most of us can imagine.

The subject really gained traction last year, when Stephen Hawking, Elon Musk and more than 1,000 AI and robotics researchers signed an open letter warning against the use of AI in weapons development. The following month, BAE Systems unveiled Taranis, the most advanced autonomous UAV ever created; there are currently 40 countries working on the deployment of AI in weapons development.

Those in the defense industry are not the only ones engaging in an arms race to create advanced AI. Tech giants Facebook, Google, Microsoft and IBM are all pursuing various AI initiatives, as well as competing to develop digital personal assistants like Facebook's M, Microsoft's Cortana and Apple's Siri.

Read more

Allen Institute working with Baylor on reconstructing neuronal connections.


The Intelligence Advanced Research Projects Activity (IARPA) has awarded an $18.7 million contract to the Allen Institute for Brain Science, as part of a larger project with Baylor College of Medicine and Princeton University, to create the largest ever roadmap to understand how the function of networks in the brain’s cortex relates to the underlying connections of its individual neurons.

The project is part of the Machine Intelligence from Cortical Networks (MICrONS) program, which seeks to revolutionize machine learning by reverse-engineering the algorithms of the brain.

“This effort will be the first time that we can physically look at more than a thousand connections between neurons in a single cortical network and understand how those connections might allow the network to perform functions, like process visual information or store memories,” says R. Clay Reid, Ph.D., Senior Investigator at the Allen Institute for Brain Science, Principal Investigator on the project.

Read more

Another data scientist with the pragmatic thinking that is badly needed today. Keeping it real with Una-May O'Reilly.


Mumbai: Una-May O'Reilly, principal research scientist at the Anyscale Learning For All (ALFA) group at the Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory, has expertise in scalable machine learning, evolutionary algorithms, and frameworks for large-scale, automated knowledge mining, prediction and analytics. O'Reilly is one of the keynote speakers at the two-day EmTech India 2016 event, to be held in New Delhi on 18 March.

In an email interview, she spoke, among other things, about how machine learning underpins data-driven artificial intelligence (AI), giving the ability to predict complex events from predictive cues within streams of data. Edited excerpts:

When you say that the ALFA group aims at solving the most challenging Big Data problems—questions that go beyond the scope of typical analytics—what do you exactly mean?

Typical analytics visualize and retrieve direct information in the data. This can be very helpful. Visualizations allow one to discern relationships and correlations, for example. Graphs and charts plotting trends and comparing segments are informative. Beyond its value for typical analytics, one should also be aware that the data has latent (that is, hidden) predictive power. By using historical examples, machine learning makes it possible to build predictive models from data. What segments are likely to spend next month? Which students are likely to drop out? Which patient may suffer an acute health episode? Predictive models of this sort rely upon historical data and are vital. Predictive analytics is new, exciting and what my group aims to enable technologically.
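To make that distinction concrete, here is a toy sketch (invented data and feature names, not ALFA's code) of building a predictive model from historical examples: a classifier is fit on made-up records of past students and then asked which current students look likely to drop out.

```python
# Illustrative only: a predictive model learned from (fabricated) historical data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical examples: [attendance rate, average grade, logins per week] -> dropped out?
past_students = np.array([
    [0.95, 82, 14, 0],
    [0.60, 55,  3, 1],
    [0.88, 74, 10, 0],
    [0.40, 48,  2, 1],
    [0.92, 90, 12, 0],
    [0.55, 60,  4, 1],
])
X, y = past_students[:, :3], past_students[:, 3]

model = LogisticRegression().fit(X, y)  # learn the latent predictive pattern

# Current students we want predictions for.
current = np.array([[0.90, 80, 11],
                    [0.50, 52,  3]])
print(model.predict_proba(current)[:, 1])  # estimated dropout probability for each
```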

Read more

I believe there have been good advances in AI thanks to processing performance; however, as I highlighted earlier, many of the principles, like complex algorithms along with pattern and predictive analysis of large volumes of information, haven't changed much since my own early work with AI. Where I have concerns is the foundational infrastructure that "connected" AI resides on. The ongoing hacking and attacks we see today could make AI adoption fall really short, and in the long run make AI look pretty bad.


A debate in New York tries to settle the question.

By Larry Greenemeier on March 10, 2016.

Read more

When I work on AI today and look at its fundamental principles, it is not that much different from the work that another teammate and I did many years ago developing a real-time (RT) Proactive Environmental Response System. Sure, there are some differences in processors, etc. However, the principles are the same when you consider some of the extremely complex algorithms we had to develop to ensure that our system could proactively interpret patterns and act on its own analysis. We did have a way to override any system actions.


These questions originally appeared on Quora, the knowledge-sharing network where compelling questions are answered by people with unique insights.

Answers by Neil Lawrence, Professor of Machine Learning at the University of Sheffield, on Quora.

Q: What do you think about the impact of AI and ML on the job market in 10, 20, 50 years from now?

A: I think predicting anything over a horizon longer than 10 years is very difficult. But there are a few thoughts I have on this question.

Read more

The US government's cool $100 million in brain research. As we have been highlighting over the past couple of months, the US government's IARPA and DARPA programs have stepped up, and intend to keep stepping up, their own efforts in BMIs and robotics for the military; I am certain that this research will help their efforts and progress.


Intelligence project aims to reverse-engineer the brain to find algorithms that allow computers to think more like humans.

By Jordana Cepelewicz on March 8, 2016.

Read more

Don't let the title mislead you: quantum computing is not going to require AI to operate or to develop its computing capabilities. However, what is well known across the quantum community is that AI will greatly benefit from the processing capabilities and performance of quantum computing. There has been strong interest in marrying the two together. However, the quantum maturity gap and timing did not make that possible until recently, thanks to various discoveries in microchip development, programming languages (Quipper), quantum-dot silicon wafers, etc.


Researchers at the University of Vienna have created an algorithm that helps plan experiments in this mind-boggling field.

Read more

Glad to see this article get published, because it echoes many of the concerns raised about the Chinese and Russian governments, and their hackers, getting their infrastructures onto quantum computing before the US, Europe, and Canada. Computer scientists at MIT and the University of Innsbruck say they've assembled the first five quantum bits (qubits) of a quantum computer that could someday factor any number, and thereby crack the security of traditional encryption schemes.
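For context on why factoring is the headline application, the sketch below shows the classical arithmetic that turns a period into factors, using N = 15, the number factored in the experiment. The quantum hardware's job in Shor's algorithm is only the period-finding step, which is replaced here by a brute-force stand-in for illustration.

```python
# A hedged classical sketch of Shor's post-processing; the quantum step is faked.
from math import gcd

def classical_period(a, N):
    """Brute-force stand-in for the quantum period-finding step."""
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N, a = 15, 7                # 7 is coprime to 15, a valid base for the algorithm
r = classical_period(a, N)  # a quantum computer would find this exponentially faster
assert r % 2 == 0           # an even period is needed for the trick below

# If a^r = 1 (mod N), then (a^(r/2) - 1)(a^(r/2) + 1) is a multiple of N,
# so each bracket shares a nontrivial factor with N.
half = pow(a, r // 2, N)
p, q = gcd(half - 1, N), gcd(half + 1, N)
print(r, p, q)              # period 4, factors 3 and 5
```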


Shor’s algorithm performed in a system less than half the size experts expected.

Read more