

With their millions and billions of numerical parameters, deep learning models can do many things: detect objects in photos, recognize speech, generate text—and hide malware. Neural networks can embed malicious payloads without triggering anti-malware software, researchers at the University of California, San Diego, and the University of Illinois have found.

Their malware-hiding technique, EvilModel, sheds light on the security concerns of deep learning, which has become a hot topic of discussion at machine learning and cybersecurity conferences. As deep learning becomes ingrained in the applications we use every day, the security community needs to think about new ways to protect users against its emerging threats.
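To make the general idea concrete, here is a minimal sketch of parameter-level payload embedding: arbitrary bytes are written into the low-order bytes of a layer's float32 weights and read back later. This is an illustration of the concept rather than the EvilModel authors' exact procedure; the function names, the `bytes_per_float` setting, and the sample layer are assumptions made for demonstration.

```python
# Illustrative sketch only: hide arbitrary bytes in the low-order mantissa
# bytes of float32 weights, the general idea behind parameter-level payload
# embedding. Not the EvilModel paper's exact method; all names are made up.
import numpy as np

def embed_payload(weights: np.ndarray, payload: bytes, bytes_per_float: int = 2) -> np.ndarray:
    """Overwrite the lowest-order bytes of each float32 weight with payload bytes."""
    flat = weights.astype(np.float32).ravel()        # contiguous float32 copy
    raw = flat.view(np.uint8).reshape(-1, 4)         # 4 bytes per float32 (little-endian)
    capacity = raw.shape[0] * bytes_per_float
    if len(payload) > capacity:
        raise ValueError("payload larger than embedding capacity")
    for i, b in enumerate(payload):
        row, col = divmod(i, bytes_per_float)        # col indexes a low-order mantissa byte
        raw[row, col] = b
    return flat.reshape(weights.shape)

def extract_payload(weights: np.ndarray, length: int, bytes_per_float: int = 2) -> bytes:
    """Read back `length` bytes previously written into the weights."""
    raw = np.ascontiguousarray(weights, dtype=np.float32).ravel().view(np.uint8).reshape(-1, 4)
    out = bytearray()
    for i in range(length):
        row, col = divmod(i, bytes_per_float)
        out.append(int(raw[row, col]))
    return bytes(out)

if __name__ == "__main__":
    layer = np.random.randn(512, 512).astype(np.float32)   # stand-in for one model layer
    secret = b"example payload bytes, not actual malware"
    stego = embed_payload(layer, secret)
    assert extract_payload(stego, len(secret)) == secret
    # Only low-order mantissa bytes changed, so each weight moves by well under 1%.
    print("max relative change:", float(np.max(np.abs(stego - layer) / (np.abs(layer) + 1e-12))))
```

Because only the low-order mantissa bytes are touched, the model's behavior changes very little, which is part of why such payloads can slip past conventional signature-based scanners, as the researchers note.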

China has developed what it calls a Quantum Satellite System in a bid to defend its power infrastructure against adversary intrusion. The country boasts the world’s largest national power grid.

Cyber attackers could target 3D printed objects in health care, aerospace, and other fields.

Cybersecurity researchers at Rutgers University-New Brunswick and the Georgia Institute of Technology have proposed new ways to protect 3D printed objects such as drones, prostheses, and medical devices from stealthy “logic bombs.”

The researchers will present their paper, titled “Physical Logic Bombs in 3D Printers via Emerging 4D Techniques,” at the 2021 Annual Computer Security Applications Conference on December 10, 2021.

Security experts around the world raced Friday to patch one of the worst computer vulnerabilities discovered in years, a critical flaw in open-source code widely used across industry and government in cloud services and enterprise software.

“I’d be hard-pressed to think of a company that’s not at risk,” said Joe Sullivan, chief security officer for Cloudflare, whose online infrastructure protects websites from malicious actors. Untold millions of servers have it installed, and experts said the fallout would not be known for several days.

New Zealand’s computer emergency response team was among the first to report that the flaw, in a Java-language logging utility from the Apache Software Foundation used to record user activity, was being “actively exploited in the wild” just hours after it was publicly reported Thursday and a patch was released.
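For defenders, the immediate problem is simply finding every place the library is installed. The following is a minimal triage sketch, not a production scanner: it assumes vulnerable copies can be spotted from a `log4j-core-<version>.jar` filename, and the `PATCHED` threshold reflects advisories from December 2021 and should be checked against current guidance. Real tools also unpack nested and shaded JARs and check class fingerprints.

```python
# Illustrative triage sketch: walk a directory tree and flag log4j-core JARs
# whose version (taken from the filename) predates the patched releases.
# Filename-based detection and the version threshold are assumptions here.
import os
import re
import sys

LOG4J_JAR = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$", re.IGNORECASE)
PATCHED = (2, 17, 0)  # assumption: treat anything older than 2.17.0 as suspect

def find_suspect_jars(root: str):
    """Yield (path, version) pairs for log4j-core JARs older than PATCHED."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            match = LOG4J_JAR.search(name)
            if not match:
                continue
            version = tuple(int(part) for part in match.groups())
            if version < PATCHED:
                yield os.path.join(dirpath, name), version

if __name__ == "__main__":
    scan_root = sys.argv[1] if len(sys.argv) > 1 else "."
    hits = list(find_suspect_jars(scan_root))
    for path, version in hits:
        print(f"possibly vulnerable: {path} (log4j-core {'.'.join(map(str, version))})")
    print(f"{len(hits)} suspect JAR(s) found under {scan_root}")
```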

The artificial intelligence industry should create a global community of hackers and “threat modelers” dedicated to stress-testing the harm potential of new AI products, in order to earn the trust of governments and the public before it’s too late.

This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge’s Centre for the Study of Existential Risk (CSER), who have authored a new “call to action” published today in the journal Science.

They say that companies building intelligent technologies should harness techniques such as “red team” hacking, audit trails and “bias bounties”—paying out rewards for revealing ethical flaws—to prove their integrity before releasing AI for use on the wider public.

Google said Tuesday it has moved to shut down a network of about one million hijacked electronic devices used worldwide to commit online crimes, while also suing Russia-based hackers the tech giant claimed were responsible.

The so-called botnet of infected devices, which was also used to surreptitiously mine bitcoin, was cut off, at least for now, from the people wielding it on the internet.

“The operators of Glupteba are likely to attempt to regain control of the botnet using a backup command and control mechanism,” wrote Shane Huntley and Luca Nagy from Google’s threat analysis group.