
Amazon And Microsoft Claim AI Can Read Human Emotions. Experts Say the Science Is Shaky


Facial recognition technology is being tested by businesses and governments for everything from policing to employee timesheets. Even more granular results are on their way, promise the companies behind the technology: Automatic emotion recognition could soon help robots understand humans better, or detect road rage in car drivers.

But experts warn that the facial-recognition algorithms attempting to interpret facial expressions may rest on uncertain science. The warnings appear in the annual report of the AI Now Institute, a nonprofit that studies the impact of AI on society. The report also includes recommendations for the regulation of AI and for greater transparency in the industry.

“The problem is now AI is being applied in a lot of social contexts. Anthropology, psychology, and philosophy are all incredibly relevant, but this is not the training of people who come from a technical [computer science] background,” says Kate Crawford, co-founder of AI Now, distinguished research professor at NYU, and principal researcher at Microsoft Research. “Essentially the narrowing of AI has produced a kind of guileless acceptance of particular strands of psychological literature that have been shown to be suspect.”


2 Comments so far

  1. Just like a phonograph record that records sounds, A.I. can be entangled with a person’s emotions. Therefore, A.I. can detect emotions.

    But are emotions universal? I’m tempted to say emotions are universal, but the means to express them are culturally conditioned: researchers found some Asian tribes who express surprise when saying no to someone. That might happen sometimes in Western cultures, but not universally.

  2. “Automatic emotion recognition could soon help robots understand humans better, or detect road rage in car drivers.”

    Or, we could make robots and A.I. act up just like humans.

    And, oh yeah, I kept trying to talk to these emotional-A.I. researchers about using this technology to deal with terrorism, school shootings, and the like, but as you can see, they don’t want to go there. They’re still saying, “Oh, we could use this for detecting emotions in … car drivers!”

    They don’t want a lie detector to detect the emotions of an actual criminal. Or how about a cop, or an F.B.I. agent who can’t emotionally handle facts/logic that don’t fit his cubbyholes?

    Or how about detecting emotional bias in scientists who don’t want to deal with issues because of their conditioned beliefs?
