Forget super-AI. Crappy AI is more likely to be our downfall, argues researcher.
The past couple of years have been a real cringey time to be an AI researcher. Just imagine a whole bunch of famous technologists and top-serious science authorities all suddenly taking aim at your field of research as a clear and present threat to the very survival of the species. All you want to do is predict appropriate emoji use based on textual analysis, and here’s Elon Musk saying that this thing he doesn’t really seem to know much about is the actual apocalypse.
It’s not that computer scientists haven’t pushed back against AI hype, but an academic you’ve never heard of (all of them?) pitching the headline “AI is hard” is at a disadvantage against a famous person whose job description largely centers on making big public pronouncements. This month that academic is Alan Bundy, a professor of automated reasoning at the University of Edinburgh in Scotland, who argues in the Communications of the ACM that there is a real AI threat, but it isn’t human-like machine intelligence run amok. Quite the opposite: the danger is shitty AI. Incompetent, bumbling machines.