
The media is all abuzz with tales of Artificial Intelligence (AI). The provocative two-letter abbreviation conjures up images of invading autonomous robot drones and Terminator-like machines wreaking havoc on mankind. Then there's the pervasive presence of deep learning and big data, also referred to as artificial intelligence. This might leave some of us wondering: is artificial intelligence one or all of these things?

In that sense, AI leaves a bit of an ambiguous trail – there does not seem to be a clear definition, even amongst scientists and researchers in the field. There are certainly many different branches of AI. I asked Dr. Roger Schank, Professor Emeritus at Northwestern University, for a clearer definition; he told me that artificial intelligence is not big data and deep learning algorithms, at least not in the pure sense of the term.

Roger emphasizes that intelligence has everything to do with the intersection of learning, interaction, and memory. "I will tell you the number one thing people do, it's pretty obvious – they talk to each other. Guess how hard that is? That is phenomenally hard, that is the subsection of AI called natural language processing, the part that I worked on my whole life, and I understand how far away we are from that."

Take a "simple" AI concept, such as how to create a computer that plays chess, to better understand the challenge. There are, broadly speaking, two approaches to creating an intelligent machine that can play chess like a champion. The first requires programming the computer to search thousands of possible moves ahead, while the second involves building a system that tries to imitate a grand master. In the historical pursuit of an artificially intelligent entity, the vast majority of scientists chose the first option, programming based on prediction.

Searching thousands of moves ahead sounds like a next-to-impossible task, but scientists have worked for decades to do just that, because the second option — imitation through trial and error — is that much more complex. Still, if we want to create an artificial intelligence that thinks on its own, Schank argues that the second option is the more promising of the two. "Some of us always saw that this could be the field that could tell us more about people by pursuing method number two (i.e. imitating the grand master)," he says.
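To make the first approach concrete, here is a minimal, runnable sketch of that look-ahead idea, applied to a toy take-away game rather than chess; a real chess engine runs the same kind of search over a vastly larger game tree, cutting off at a fixed depth and scoring positions with a heuristic evaluation function.

```python
# Exhaustive minimax over a toy game: players alternately remove 1-3
# stones from a pile, and whoever takes the last stone wins. This is
# the "predict ahead" strategy in miniature: score every reachable
# outcome, then pick the move that is best against perfect resistance.

def minimax(stones, maximizing):
    if stones == 0:
        # The player who just moved took the last stone and won.
        return -1 if maximizing else 1
    moves = [m for m in (1, 2, 3) if m <= stones]
    scores = [minimax(stones - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    return max((m for m in (1, 2, 3) if m <= stones),
               key=lambda m: minimax(stones - m, False))

print(best_move(10))  # prints 2: leaves 8 stones, a losing position for the opponent
```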

Learning more about people while simultaneously developing true intelligence is why Schank entered the field in the first place. “When we talk about Facebook, we might think about the work of AI and face recognition; this technology has certainly come a long way, but that’s a different part of AI. The part of AI that people imagine – the talking and teaching and thinking robots – most people that talk about AI are not really talking about these questions.”

The famous Turing test, staged every year in chatbot competitions, is another example of researchers working toward developing an artificial intelligence, and yet every year there is little doubt that the AI is a computer. "This is not AI; this is something (chat bots) that could 'fool' someone," Roger argues.

In order to make a legitimately useful house robot, for example, scientists would have to solve the natural language problem, the memory problem, and the learning problem. If a future household robot makes a bad meal and overcooks the meat, you want the robot to learn from its mistakes and become smarter through experience. Schank describes this seemingly simple act – learning from mistakes and having a sense of awareness about what to do next – as the hallmark of intelligence.

Schank is particularly interested in AI that can help humans by providing more than just a great restaurant review or telling a joke on command. Currently, he is building a program called ExTRA (Experts Telling Relevant Advice), developed for and through DARPA, with the objective of "getting a machine to say the right thing to the right person at the right moment." In fact, the emphasis is less on the machine and more on an intelligently organized body of knowledge.

Roger recounts a real-life story, which starts with a ship traveling through the Suez Canal when the boiler suddenly catches fire. The Captain starts to put out the fire, when his Superior, who is also on board the ship, asks the Captain what he is doing. "Why, putting out the fire, of course!" replies the Captain. The Superior orders the Captain to continue through the canal without stopping for the fire, explaining that he cannot stop the ship in the Suez Canal, for reasons relating to corrupt Egyptian officials who will not hesitate to take over the ship and cargo. "We're not doing it, keep going," orders the Captain's Superior.

"I thought that was a weird story," remarks Schank. It was only later, after meeting a real ship captain and giving him the story's premise, that Roger was surprised to find the captain arriving at the same conclusion: 'Full speed ahead!' This story serves as an illustration of getting a story, from an expert, at the moment when you need it most – a "just in time" story. There is untold value in receiving wisdom in a timely fashion, often expressed in various cultures through oral or written short stories.

On the road to getting a machine to be intelligent, one would have to conquer the "expert in the machine." Working on artificial intelligence that imitates a human mind is not a clean and streamlined process. Developing a machine or system that can imitate a story-telling human does not necessarily yield an intelligent entity. Does the computer really understand what it's saying? As far as we can deduce, the answer is still no. At some point, Schank remarks, scientists have to know and incorporate the structure of human memory and learning – the key issue in intelligence – in order to build a truly intelligent machine.

Schank does not believe a true, machine-based AI is going to emerge in his lifetime. There simply has not been enough funding in the appropriate direction of AI research. Yet Roger believes that in the next 10 years, we can replicate a version of 'just-in-time' teaching: an indexed system that helps people think through situations in life by providing them with an extension of mind, a tool that improves human decision-making through helpful and relevant stories.

Dr. Nils J. Nilsson spent almost a lifetime in the field of Artificial Intelligence (AI) before writing and publishing his book, The Quest for Artificial Intelligence (2009). I recently had the opportunity to speak with the former Stanford computer science professor, now retired at the age of 82, and reflect on the earlier accomplishments that have led to some of the current trends in AI, as well as the serious economic and security considerations that need to be made about AI as society moves ahead in the coming decades.

The Early AI that Powers Today’s Trends

One key contribution of early AI development was the rules-based expert system, such as MYCIN, developed in the mid-1970s by Ted Shortliffe and colleagues at Stanford University. The knowledge built into the diagnostic system was gleaned from medical diagnosticians, and the system would ask questions based on that knowledge. A person could type in answers about a patient's tests, symptoms, etc., and the program would attempt to diagnose diseases and prescribe therapy.
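As a rough illustration of how such a system operates (this is not MYCIN's actual rule language, which weighted its conclusions with certainty factors over a much larger rule base), a rules-based diagnostic step can be sketched in a few lines:

```python
# A toy rules-based diagnostic system in the spirit of MYCIN.
# Each rule maps a set of findings to a conclusion; the rules and
# findings below are invented purely for illustration.

RULES = [
    ({"fever", "stiff neck"}, "possible meningitis: recommend lumbar puncture"),
    ({"fever", "cough"}, "possible respiratory infection: recommend chest X-ray"),
]

def diagnose(observed):
    """Fire every rule whose conditions are all present in the findings."""
    return [conclusion for conditions, conclusion in RULES
            if conditions <= observed]

# Findings a clinician might have typed in about a patient.
print(diagnose({"fever", "stiff neck"}))
# -> ['possible meningitis: recommend lumbar puncture']
```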

"Bringing us more up to the future was the occurrence of huge databases (in the 1990s) — sometimes called big data — and the ability of computers to mine that data and find information and make inferences," remarks Nils. This made possible the new work on face recognition, speech recognition, and language translation. "AI really had what might be called a take-off at this time." Both of these technologies (rules-based expert systems and big-data mining) feed into IBM's Watson Health, which promises to give healthcare providers access to powerful tools in a cloud-based data-sharing hub.

Work in neural networks, another catalyst, went through two phases: an earlier phase in the 1950s and 1960s and a later phase in the 1980s and 1990s. "The second phase (of neural networks) allowed…people to make changes in the connection strengths in those networks and multiple layers, and this allowed neural networks that can steer and drive automobiles." These more primitive networks paved the way for the cutting-edge work being done by today's scientists in the self-driving automobile industry at companies like Tesla and Google.
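To illustrate what "changing the connection strengths" in a multi-layer network means in practice, here is a small sketch: a two-layer network trained on the XOR function with plain gradient descent. The layer sizes, learning rate, and iteration count are arbitrary choices for the sketch, not anything from Nilsson's account.

```python
# A tiny two-layer neural network learning XOR. Training repeatedly
# nudges every connection strength (weight) downhill on the error,
# which is the "second phase" advance Nilsson describes.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # layer-1 connection strengths
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # layer-2 connection strengths

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)            # hidden-layer activations
    out = sigmoid(h @ W2 + b2)          # network output
    # Backpropagate the error and adjust every weight a little.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```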

Robotics was also being developed at Stanford in the 1950s and 1960s. A robot could look at its environment and determine the positions of objects; it could be given a task and then make a plan of action to achieve the goal. A built-in monitoring system allowed it to evaluate results, re-orient itself, and get back on track. These early robots ran on a Digital Equipment Corporation computer that was not nearly as powerful as the technology we have in a present-day wristwatch, let alone an autonomous drone.

All of these early technologies led to the trends in industry that are now moving and shaking the global economy. An interesting difference worth noting between how the old and new technologies were developed has to do with the driving context. There is a well-known distinction between 'demand-pull' technology (using AI to solve existing problems) and 'push' technology (AI developed from the top down, without necessarily meeting specific user needs).

In the earlier days of AI research, scientists and engineers leaned toward the latter 'push' method, as opposed to the 'demand-pull' strategy that is more prevalent today (though certainly both exist). Early AI scientists "really wanted to see how far they could leverage the technology," remarks Nilsson; they were fascinated by the basic techniques, but not yet sure of their purpose.

AI Threats on the Horizon

Developing technologies pose an increasing threat to global security. People are, and will be, monitored more often (at airports, street corners, etc.). There is also the issue of autonomous weapons, in particular planes or drones that are no longer guided as they are now. The technology is already available to let such machines make decisions autonomously. This is already a pressing public issue to which the United States, "as defenders of democracy", and other nations should pay careful attention, remarks Nilsson.

The theoretical concerns about AI's threat to the existence of the human species, so readily covered in the media, are legitimate; however, Nils does not believe they are the only concern for the very near future. Instead, he suggests that AI poses other relevant threats about which we should also be thinking, such as risks to the existing economic system.

The effect of increasing automation on employment is one point for real consideration. Economists have argued that automation has occurred in the past and that such innovations have not prevented new jobs and startups from emerging; however, with the seemingly inevitable development of human-level cognitive AI, there are many more jobs (certain types of journalism, for example) that machines can also perform, and perform more quickly.

"Now, should we regard that as a threat, or should we regard it as, 'well, lots of people have jobs that they don't like; why should we regard eliminating the need for people to work at those jobs as a threat?'" says Nils. While increased automation will inevitably result in a decreased need for human labor, the question of what humans will do with more leisure time remains. They could spend the time doing more creative things, but this is a serious consideration for humankind.

Nilsson points out that nations will also need to reorient the economy. “The production of goods and services will certainly increase, but will be done by robots and automation; I think the big problem for us is to decide, ‘okay, how do we actually distribute these goods and services to people who aren’t earning a salary?’”

The Need for Real Solutions

What are the potential solutions? On autonomous weapons, Nils believes that there certainly needs to be international collaboration. The United Nations (UN), “which sometimes is not as effective as it should be”, needs to be heavily involved. Nilsson states the need for forming other alliances with NATO, the Chinese, and other rising governments; he’s not sure what to do about the Middle East, which is already a “hot spot.”

Regardless, Nilsson emphasizes that the United States needs to be able to lead the way and set better, well thought-out examples in how we use such technology. “Not only we, as a ‘defender of democracy’ should pay careful attention to and worry about (these threats), but other nations need to do so also.”

On the issue of employment, Nils points to the need to ask some tough questions, i.e. "are we going to have a policy of income distribution or reverse income taxes?" There is the problem of people's ability to purchase, but also the very real consideration of what people will do with their time, a paradigm shift. "In the past, people didn't have that worry because they had to work…we can't very well go back to the old bread-and-circuses thing of the Romans," he jests.

Nilsson believes that this will require the usual avenues of politics and think tanks, though one would assume a much more active and integrated citizenry as well. “One important part is the citizenry has to be more well-informed about these threats than it is at the moment.” Coming up with real solutions requires a real collaborative, multi-pronged effort, one that is all too often easier said than done.


Hosted by the IEEE Geoscience and Remote Sensing Society, the International Geoscience and Remote Sensing Symposium 2015 (IGARSS 2015) will be held from Sunday, July 26th through Friday, July 31st, 2015 at the Convention Center in Milan, Italy. Milan is also the host city of the EXPO 2015 exhibition, whose theme is "Feeding the planet: energy for life".



They took each word and looked for other words that often appear nearby in a large corpus of text. They then used a clustering algorithm to see how those context words group together. The final step was to look up the different meanings of the word in a dictionary and match the clusters to each meaning.

This can be done automatically because dictionary definitions include sample sentences in which the word is used in each different way. By calculating the vector representation of these sentences and comparing them to the vector representation of each cluster, it is possible to match them.
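A rough sketch of that pipeline, using an invented mini-corpus, invented dictionary glosses, and simple bag-of-words vectors in place of the richer representations used in the actual research:

```python
# Toy word-sense matching: represent each occurrence of an ambiguous
# word by a vector of its context words, cluster the occurrences,
# then label each cluster with the dictionary sense whose sample
# sentence is most similar. All data here is invented.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

contexts = [            # sentences containing the ambiguous word "bank"
    "deposited money at the bank on the corner",
    "the bank approved the loan application",
    "fished from the bank of the river",
    "sat on the grassy bank by the stream",
]
senses = {              # dictionary glosses with sample sentences
    "financial institution": "she opened an account at the bank",
    "side of a river": "they walked along the bank of the river",
}

vec = CountVectorizer().fit(contexts + list(senses.values()))
X = vec.transform(contexts).toarray()
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
sense_vecs = vec.transform(list(senses.values())).toarray()

for label in range(2):
    centroid = X[clusters == label].mean(axis=0, keepdims=True)
    best = int(np.argmax(cosine_similarity(centroid, sense_vecs)))
    print(f"cluster {label} -> {list(senses)[best]}")
```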