Deepfakes are the most concerning use of AI for crime and terrorism, according to a new report from University College London.
The research team first identified 20 different ways AI could be used by criminals over the next 15 years. They then asked 31 AI experts to rank them by risk, based on their potential for harm, the money they could make, their ease of use, and how hard they are to stop.
Deepfakes — AI-generated videos of real people doing and saying fictional things — earned the top spot for two major reasons. Firstly, they're hard to identify and prevent. Automated detection methods remain unreliable, and deepfakes are also getting better at fooling human eyes. A recent Facebook competition to detect them with algorithms led researchers to admit it's "very much an unsolved problem."