
Can an emotional component to artificial intelligence be a benefit?

Robots with passion! Emotional artificial intelligence! These concepts have been appearing in books and movies lately; a recent example is the movie Ex Machina. Now, I'm not an AI expert, and cannot speak to the technological challenges of developing an intelligent machine, let alone an emotional one. I do, however, know a bit about problem solving, and that relates to both intelligence and emotions. It is this emotional component of problem solving that leads me to speculate on the potential implications for humanity if powerful AIs were to have human emotions.

Why the question about emotions? In a roundabout way, it has to do with how we observe and judge intelligence. The popular way to measure intelligence in a computer is the Turing test: if a computer can fool a person, through conversation, into thinking it is a person, then it has human-level intelligence. But we know that the Turing test by itself is insufficient as a true test of intelligence. Sounding human in dialogue is not the primary method we use to gauge intelligence in other people or in other species. Problem solving seems to be a more reliable test, whether through IQ tests built around problem solving or through direct real-world problem solving.

As an example of problem solving, we judge how intelligent a rat is by how fast it can navigate a maze to get to food. Let's look at this example in light of the first few steps of problem solving.

Fundamental to any problem solving is recognizing that a problem exists. In this example, the rat is hungry. It desires to be full. It can observe its current state (hungry), compare it with its desired state (full), and determine that a problem exists. It is now motivated to take action.

Desire is intimately tied to emotion. Since it is desire that allows the determination of whether or not a problem exists, one can infer that emotions allow for the determination that a problem exists. Emotion is a motivator for action.

Once a problem is determined to exist, it is important to define the problem. In this simple example this step isn’t very complex. The rat desires food, and food is not present. It must find food, but its options for finding food are constrained by the confines of the maze. But the rat may have other things going on. It might be colder than it would prefer. This presents another problem. When confronted with multiple problems, the rat must prioritize which problem to address first. Problem prioritization again is in the realm of desires and emotions. It might be mildly unhappy with the temperature, but very unhappy with its hunger state. In this case one would expect that it will maximize its happiness by solving the food problem before curling up to solve its temperature problem. Emotions are again in play, driving behavior which we see as action.

The next steps in problem solving are to generate and implement a solution. In our rat example, the rat will most likely determine whether this maze is similar to ones it has seen in the past, and try to run it as fast as it can to get to the food. There is not a lot of emotion involved in these steps, with the possible exception of happiness if it recognizes the maze. However, if we look at problems that people face, emotion riddles the process of developing and implementing solutions. In a real-world environment, problem solving almost always involves working with other people, because they are either the cause of the problem, or key to its solution, or both. These people have a great deal of emotions associated with them. Most problems require negotiation to solve, and negotiation by its nature is charged with emotion. To be effective at problem solving, a person has to be able to interpret and understand the wants and desires (emotions) of others. This sounds a lot like empathy.

Now, let's apply the emotional part of problem solving to artificial intelligence. The step of determining whether or not a problem exists doesn't require emotion if the machine in question is a thermostat or a Roomba. A thermostat doesn't have its own desired temperature to maintain. Its desired temperature is determined by a human and given to the thermostat. That human's desires are based on a combination of preferences learned from personal experience and hardwired preferences shaped by millions of years of evolution. The thermostat is simply a tool.

Now, the whole point behind an AI, especially an artificial general intelligence, is that it is not a thermostat. It is supposed to be intelligent. It must be able to solve problems in a real-world environment that involves people. It has to be able to determine that problems exist and then prioritize those problems, without asking a human for help. It has to be able to interact socially with people, identifying and understanding their motivations and emotions in order to develop and implement solutions. And it has to be able to make these choices, which are based on desires, without the benefit of the millions of years of evolution that shaped ours. If we want it to truly pass for human-level intelligence, it seems we'll have to give it our best preferences and desires to start with.

A machine that cannot choose its goals cannot change its goals. A machine without that choice, if given the goal of, say, maximizing pin production, will creatively and industriously attempt to convert the entire planet into pins. Such a machine cannot question instructions that are illegal or unethical. Herein lies the dilemma: which is more dangerous, the risk that someone will program an AI that has no choice to do bad things, or the risk that an AI will decide to do bad things on its own?

No doubt about it, this is a tough call. I'm sure some AIs will be built with minimal or no preferences, with the intent that they will simply be very smart tools. But without giving an AI a set of starting desires and preferences comparable to those of humans, we will be interacting with a truly alien intelligence. I, for one, would be happier with an AI that at least felt regret about killing someone than with one that didn't.

Transhumanists are into improvements, and many talk about specific problems; Nick Bostrom, for instance, does. However, Bostrom's problem statements have been criticized for not necessarily being problems at all, which I think is largely why one must take care with the problem definition (see step #2 below).

Sometimes people talk about their "solutions" for problems, for instance this one in H+ Magazine. But in many cases they are actually talking about their ideas of how to solve a problem, or making science-fictional predictions. So if you surf the web, you will find a lot of good ideas about possibly important problems—but much of what you find will be undefined (or poorly defined) problem ideas and solutions.

These proposed solutions often make no attempt to find root causes, or they assume the wrong root cause. And a realistic, complete plan for solving a problem is rare.

8D (Eight Disciplines) is a process used in various industries for problem solving and process improvement. The 8D steps described below could be very useful for transhumanists, not just for talking about problems but for actually implementing solutions in real life.

Transhuman concerns are complex not just technologically but also socioculturally. Some problems are more than just "a" problem—they are a dynamic system of problems, and a problem-solving process by itself is not enough. There has to be management, goals, etc., most of which is outside the scope of this article. But one should first know how to deal with a single problem before scaling up, and 8D is a process that can be used on a huge variety of complex problems.

Here are the eight steps of 8D:

  1. Assemble the team
  2. Define the problem
  3. Contain the problem
  4. Root cause analysis
  5. Choose the permanent solution
  6. Implement the solution and verify it
  7. Prevent recurrence
  8. Congratulate the team
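
The ordered, gated nature of the steps above can be sketched in code. This is a hypothetical illustration only: the step names come from the list above, but the `EightDTracker` class and its enforce-the-order behavior are my own assumptions.

```python
from dataclasses import dataclass, field

# The eight disciplines, in order (names taken from the list above).
EIGHT_D_STEPS = [
    "Assemble the team",
    "Define the problem",
    "Contain the problem",
    "Root cause analysis",
    "Choose the permanent solution",
    "Implement the solution and verify it",
    "Prevent recurrence",
    "Congratulate the team",
]

@dataclass
class EightDTracker:
    """Tracks progress through the 8D steps; a step can only be
    completed after every step before it is done."""
    completed: list = field(default_factory=list)

    @property
    def next_step(self):
        """The next step to perform, or None when all eight are done."""
        if len(self.completed) == len(EIGHT_D_STEPS):
            return None
        return EIGHT_D_STEPS[len(self.completed)]

    def complete(self, step: str) -> None:
        """Mark `step` done, refusing to skip ahead in the sequence."""
        if self.next_step is None:
            raise ValueError("All 8D steps are already complete")
        if step != self.next_step:
            raise ValueError(f"Next step should be {self.next_step!r}, not {step!r}")
        self.completed.append(step)
```

The point of the guard in `complete` is the same point the process makes: you cannot, for example, choose a permanent solution before doing root cause analysis.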

More detailed descriptions:

1. Assemble the Team

Are we prepared for this?

With an initial, rough concept of the problem, a team should be assembled to continue the 8D steps. The team will make an initial problem statement without presupposing a solution. They should attempt to define the “gap” (or error)—the big difference between the current problematic situation and the potential fixed situation. The team members should all be interested in closing this gap.

The team must have a leader; this leader makes agendas, synchronizes actions and communications, resolves conflicts, etc. In a company, the team should also have a “sponsor”, who is like a coach from upper management. The rest of the team is assembled as appropriate; this will vary depending on the problem, but some general rules for a candidate can be:

  • Has a unique point of view.
  • Logistically able to coordinate with the rest of the team.
  • Is not committed to preconceived notions of “the answer.”
  • Can actually accomplish change that they might be responsible for.

The size of an 8D team (at least in companies) is typically 5 to 7 people.

The team should be justified. This matters most within an organization that is paying for the team; however, even a group of transhumanists out in the wilds of cyberspace will have to defend itself when people ask, "Why should we care?"

2. Define the Problem

What is the problem here?

Let’s say somebody throws my robot out of an airplane, and it immediately falls to the ground and breaks into several pieces. This customer then informs me that this robot has a major problem when flying after being dropped from a plane and that I should improve the flying software to fix it.

Here is the mistake: The problem has not been properly defined. The robot is a ground robot and was not intended to fly or be dropped out of a plane. The real problem is that a customer has been misinformed as to the purpose and use of the product.

When thinking about how to improve humanity, or even how to merely improve a gadget, you should consider: Have you made an assumption about the issue that might be obscuring the true problem? Did the problem emerge from a process that was working fine before? What processes will be impacted? If this is an improvement, can it be measured, and what is the expected goal?

The team should attempt to grok the issues and their magnitude. Ideally, they will be informed with data, not just opinions.

Just as with medical diagnosis, the symptoms alone are probably not enough input. There are various ways to collect more data, and which methods you use depends on the nature of the problem. For example, one method is the 5 W’s and 2 H’s:

  • Who is affected?
  • What is happening?
  • When does it occur?
  • Where does it happen?
  • Why is it happening (initial understanding)?
  • How is it happening?
  • How many are affected?
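
One way to make this checklist operational is to track which questions still lack data-backed answers. A minimal sketch, assuming a simple dict-based interface (the `unanswered` helper is hypothetical; the questions are from the list above):

```python
# The 5 W's and 2 H's as a problem-definition checklist.
FIVE_W_TWO_H = [
    "Who is affected?",
    "What is happening?",
    "When does it occur?",
    "Where does it happen?",
    "Why is it happening (initial understanding)?",
    "How is it happening?",
    "How many are affected?",
]

def unanswered(answers: dict) -> list:
    """Return the checklist questions that still lack a non-empty answer."""
    return [q for q in FIVE_W_TWO_H if not answers.get(q, "").strip()]
```

A problem isn't well defined until this returns an empty list, or until the team has consciously decided a question doesn't apply.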

For humanity-affecting problems, I think it’s very important to define what the context of the problem is.

3. Contain the Problem

Containment

Some problems are urgent, and a stopgap must be put in place while the problem is being analyzed. This is particularly relevant for problems such as product defects which affect customers.

Some brainstorming questions are:

  • Can anything be done to mitigate the negative impact (if any) that is happening?
  • Who would have to be involved with that mitigation?
  • How will the team know that the containment action worked?

Before deploying an interim expedient, the team should have asked and answered these questions (they essentially define the containment action):

  • Who will do it?
  • What is the task?
  • When will it be accomplished?

A canonical example: You have a leaky roof (the problem). The containment action is to put a pail underneath the hole to capture the leaking water. This is a temporary fix until the roof is properly repaired, and mitigates damage to the floor.

Don’t let the bucket of water example fool you—containment can be massive, e.g. corporate bailouts. Of course, the team must choose carefully: Is the cost of containment worth it?

4. Root Cause Analysis

There can be many layers of causation

Whenever you think you have an answer to a problem, ask yourself: Have you gone deep enough? Or is there another layer below? If you implement a fix, will the problem grow back?

Generally, in the real world, events are causal. The point of root cause analysis is to trace the causes of your problem all the way back. If you don't find the origin of the causes, the problem will probably rear its ugly head again.

Root cause analysis is one of the most overlooked, yet most important, steps of problem solving. Even engineers often lose their way when solving a problem and jump right into a fix that later turns out to be a red herring.

Typically, driving to root cause follows one of these two routes:

  1. Start with data; develop theories from that data.
  2. Start with a theory; search for data to support or refute it.

Either way, team members must keep in mind that correlation is not necessarily causation.

One tool to use is the 5 Why’s, in which you move down the “ladder of abstraction” by continually asking: “why?” Start with a cause and ask why this cause is responsible for the gap (or error). Then ask again until you’ve bottomed out with something that may be a true root cause.
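
To make the mechanics concrete, here is a minimal sketch of the 5 Whys drill-down. The `five_whys` function and the example causal chain are hypothetical (the leaky-roof theme echoes the containment example above):

```python
def five_whys(symptom, ask):
    """Drill down from a symptom toward a root cause by repeatedly
    asking 'why?'. `ask` maps a cause to the cause beneath it, or
    None once we've bottomed out. Returns the causal chain."""
    chain = [symptom]
    for _ in range(5):  # the canonical five iterations
        deeper = ask(chain[-1])
        if deeper is None:  # bottomed out: candidate root cause reached
            break
        chain.append(deeper)
    return chain

# Hypothetical causal chain for the leaky-roof example:
causes = {
    "floor is wet": "roof leaks",
    "roof leaks": "shingle cracked",
    "shingle cracked": "no roof inspection schedule",
}
print(five_whys("floor is wet", causes.get))
# → ['floor is wet', 'roof leaks', 'shingle cracked', 'no roof inspection schedule']
```

Note that the final answer ("no roof inspection schedule") is a process failure, not a physical one — exactly the kind of root cause that prevents recurrence when fixed.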

There are many other general purpose methods and tools to assist in this stage; I will list some of them here, but please look them up for detailed explanations:

  • Brainstorming: Generate as many ideas as possible, and elaborate on the best ideas.
  • Process flow analysis: Flowchart a process; attempt to narrow down what element in the flow chart is causing the problem.
  • Ishikawa: Use an Ishikawa (aka fishbone, or cause-and-effect) diagram to try narrowing down the cause(s).
  • Pareto analysis: Generate a Pareto chart, which may indicate which cause (of many) should be fixed first.
  • Data analysis: Use trend charts, scatter plots, etc. to assist in finding correlations and trends.

And that is just the beginning—a problem may need a specific new experiment or data collection method devised.
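
As a hypothetical illustration of the Pareto analysis mentioned in the list above, the core computation is just ordering observed causes by frequency and tracking each cause's cumulative share of the total (the `pareto` helper is my own sketch):

```python
from collections import Counter

def pareto(observations):
    """Order causes by frequency and report each cause's cumulative
    share of the total, as one would read off a Pareto chart."""
    counts = Counter(observations).most_common()
    total = sum(n for _, n in counts)
    result, running = [], 0
    for cause, n in counts:
        running += n
        result.append((cause, n, running / total))
    return result
```

The first row of the result is the cause to attack first; the cumulative column shows how much of the problem the top few causes account for.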

Ideally you would have a single root cause, but that is not always the case.

The team should also come up with various corrective actions that address the root cause, to be selected and refined in the next step.

5. Choose the Permanent Solution

The solution must be one or more corrective actions that solve the cause(s) of the problem. Corrective action selection is additionally guided by criteria such as time constraints, money constraints, efficiency, etc.

This is a great time to simulate or test the solution, if possible. There might be unaccounted-for side effects, either in the system you fixed or in related systems. This is especially true for some of the major issues that transhumanists wish to tackle.

You must verify that the corrective action(s) will in fact fix the root cause and not cause bad side effects.

6. Implement the Solution and Verify It

This is the stage when the team actually sets the corrective action(s) into motion. But doing it isn't enough—the team also has to check that the solution is really working.

For some issues the verification is clear-cut. Other corrective actions have to be evaluated for effectiveness, for instance against a benchmark. Depending on the time scale of the corrective action, the team might need to add various monitors and/or controls to continually make sure the root cause stays squashed.

7. Prevent Recurrence

It’s possible that a process will revert back to its old ways after the problem has been solved, resulting in the same type of problem happening again. So the team should provide the organization or environment with improvements to processes, procedures, practices, etc. so that this type of problem does not resurface.

8. Congratulate the Team

Party time! The team should share and publicize the knowledge gained from the process as it will help future efforts and teams.

Image credits:
1. Inception (2010), Warner Bros.
2. Peter Galvin
3. Tom Parnell
4. shalawesome