For the most part, the AI achievements touted in the media aren’t evidence of great improvements in the field. AlphaGo, the Google DeepMind program that won a Go contest last year, was not a refined version of Deep Blue, the IBM program that beat the world’s chess champion in 1997; the car feature that beeps when you stray out of your lane works quite differently from the one that plans your route. Instead, the accomplishments so breathlessly reported are often cobbled together from a grab bag of disparate tools and techniques. It might be easy to mistake the drumbeat of stories about machines besting us at tasks as evidence that these tools are growing ever smarter—but that’s not happening.
Public discourse about AI has become untethered from reality in part because the field doesn’t have a coherent theory. Without such a theory, people can’t gauge progress, and characterizing advances becomes anyone’s guess. As a result, the people we hear from the most are those with the loudest voices rather than those with something substantive to say, and press reports about killer robots go largely unchallenged.
I’d suggest that one problem with AI is the name itself—coined more than 50 years ago to describe efforts to program computers to solve problems that required human intelligence or attention. Had the field been given a less spooky name, its work might seem as prosaic as operations research or predictive analytics.