
Microsoft, in collaboration with the MITRE Corporation and 11 other organizations, including IBM, Nvidia, Airbus, and Bosch, has released the Adversarial ML Threat Matrix, a framework that aims to help cybersecurity experts prepare for attacks against artificial intelligence models.

With AI models deployed in a growing number of fields, attacks that jeopardize their safety and integrity are on the rise. The Adversarial Machine Learning (ML) Threat Matrix attempts to catalog the techniques malicious adversaries employ to destabilize AI systems.

AI models perform many tasks, such as identifying objects in images, by analyzing the information they ingest for specific common patterns. Researchers have demonstrated malicious patterns that attackers could feed into AI systems to trick these models into making mistakes. An Auburn University team even managed to fool a Google LLC image-recognition model into misclassifying objects in photos simply by slightly adjusting the objects’ position in each input image.
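
The Auburn attack worked by physically repositioning objects, but the underlying idea of an adversarial example is easy to sketch on a toy model. In the minimal Python sketch below, a hypothetical linear classifier stands in for an image model, and a small gradient-sign (FGSM-style) perturbation flips its prediction; the weights, input, and epsilon are all illustrative assumptions, not part of any real attack.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for an image model:
# score > 0 -> class A, score <= 0 -> class B.
w = rng.normal(size=100)   # model weights (assumed known to the attacker)
x = rng.normal(size=100)   # a "clean" input
clean_label = np.sign(w @ x)

# FGSM-style perturbation: nudge every feature by epsilon in the
# direction that pushes the score away from the current label.
epsilon = 0.25
x_adv = x - epsilon * clean_label * np.sign(w)

adv_label = np.sign(w @ x_adv)
print("clean:", clean_label, "adversarial:", adv_label)  # labels typically differ
```

The same principle scales up to deep networks: the attacker follows the gradient of the model's loss with respect to the input, so the change to each pixel can stay imperceptibly small while the prediction flips.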

Security on the internet is a never-ending cat-and-mouse game. Security specialists constantly come up with new ways of protecting our treasured data, only for cyber criminals to devise new and crafty ways of undermining these defenses. Researchers at TU/e have now found evidence of a highly sophisticated, Russia-based online marketplace that trades hundreds of thousands of very detailed user profiles. These personal ‘fingerprints’ allow criminals to circumvent state-of-the-art authentication systems, giving them access to valuable user information, such as credit card details.
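
The TU/e report does not spell out the marketplace's internals, but the core notion of a device "fingerprint" is easy to sketch: a risk engine derives a stable identifier from a bundle of browser and device attributes, so anyone who buys a victim's full attribute bundle can reproduce the identifier and pass as a known, trusted device. A minimal, illustrative Python sketch (real risk engines combine far more signals, such as canvas rendering, installed fonts, and TLS parameters):

```python
import hashlib
import json

def device_fingerprint(attrs: dict) -> str:
    """Hash a canonical serialization of device attributes."""
    canonical = json.dumps(attrs, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Attributes of the kind traded on such marketplaces (values invented).
victim = {
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "screen": "1920x1080",
    "timezone": "Europe/Amsterdam",
    "language": "nl-NL",
}

# A buyer of the full profile computes the same fingerprint the
# victim's own browser would, sidestepping device-based risk checks.
print(device_fingerprint(victim))
```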

Our online economy depends on usernames and passwords to make sure that the person buying goods or transferring money on the internet is really who they claim to be. However, this limited form of authentication has proven to be far from secure, as people tend to reuse their passwords across several services and websites. This has led to a massive and highly profitable illegal trade in user credentials: according to a 2017 estimate, some 1.9 billion stolen identities were sold through underground markets in a single year.
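
One practical countermeasure to this credential trade is screening passwords against known breach corpora at signup or login. A minimal sketch using the public Have I Been Pwned range API, which is designed so that only the first five hex characters of the password's SHA-1 hash ever leave the machine (error handling and rate limiting are omitted):

```python
import hashlib
import urllib.request

def times_password_breached(password: str) -> int:
    """Return how often a password appears in known breaches (0 if never)."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"  # k-anonymity endpoint
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0

print(times_password_breached("password123"))  # a depressingly large number
```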

It will come as no surprise that banks and other institutions have come up with more complex authentication systems, which rely not only on something the user knows (a password) but also on something the user has (e.g., a token). This process, known as multi-factor authentication (MFA), severely limits the potential for cybercrime, but it has drawbacks. Because it adds an extra step, many users never bother to register for it, which means that only a minority of people use it.
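
To make the "something you have" factor concrete: most authenticator apps implement the time-based one-time password (TOTP) scheme of RFC 6238, in which server and app share a secret once at enrollment and then independently derive the same short-lived code from the current time. A minimal sketch using only the Python standard library (the Base32 secret below is an illustrative placeholder):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30-second steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)  # time-step counter
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                # dynamic truncation
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # e.g. "492039", valid for roughly 30 seconds
```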

Machine learning (ML) is driving incredible transformations in critical areas such as finance, healthcare, and defense, impacting nearly every aspect of our lives. Yet many businesses, eager to capitalize on advances in ML, have not scrutinized the security of their ML systems. Today, together with MITRE and with contributions from 11 organizations, including IBM, NVIDIA, and Bosch, Microsoft is releasing the Adversarial ML Threat Matrix, an industry-focused open framework that empowers security analysts to detect, respond to, and remediate threats against ML systems.

During the last four years, Microsoft has seen a notable increase in attacks on commercial ML systems. Market reports are also bringing attention to this problem: Gartner’s Top 10 Strategic Technology Trends for 2020, published in October 2019, predicts that “Through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, AI model theft, or adversarial samples to attack AI-powered systems.” Despite these compelling reasons to secure ML systems, Microsoft’s survey of 28 businesses found that most industry practitioners have yet to come to terms with adversarial machine learning. Twenty-five of the 28 businesses indicated that they don’t have the right tools in place to secure their ML systems, and they are explicitly looking for guidance. This lack of preparation is not limited to smaller organizations: we spoke to Fortune 500 companies, governments, non-profits, and small and mid-sized organizations alike.
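
Of the attack classes Gartner names, training-data poisoning is perhaps the easiest to picture: an attacker who can tamper with even a fraction of the training labels degrades whatever model is later trained on them. A minimal sketch on a synthetic dataset, assuming scikit-learn is available (the 30% flip rate is an arbitrary illustration; the size of the accuracy drop varies with model and data):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Poisoning: flip 30% of the training labels before (re)training.
rng = np.random.default_rng(0)
flip = rng.random(len(y_tr)) < 0.30
y_poisoned = np.where(flip, 1 - y_tr, y_tr)

poisoned = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

print("clean accuracy:   ", clean.score(X_te, y_te))
print("poisoned accuracy:", poisoned.score(X_te, y_te))
```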

Our survey pointed to marked cognitive dissonance, especially among security analysts, who generally believe that risk to ML systems is a futuristic concern. This is a problem, because cyberattacks on ML systems are now on the uptick. For instance, in 2020 we saw the first CVE filed for an ML component in a commercial system, and SEI/CERT issued its first vulnerability note calling attention to how many current ML systems can be subjected to arbitrary misclassification attacks that assault their confidentiality, integrity, and availability. The academic community has been sounding the alarm since 2004 and has routinely shown that ML systems, if not mindfully secured, can be compromised.

The largest DDoS attack in history was carried out against Google in 2017 by a state-backed group.

The 2.5 Tbps attack was likely the work of state-sponsored hackers using internet service providers in China, according to Google.

Hang bugs, in which software gets stuck but doesn’t crash, can frustrate both users and programmers and take companies weeks to identify and fix. Now researchers from North Carolina State University have developed software that can spot and fix these problems in seconds.

“Many of us have experience with hang bugs—think of a time when you were on a website and the wheel just kept spinning and spinning,” says Helen Gu, co-author of a paper on the work and a professor of computer science at NC State. “Because these bugs don’t crash the program, they’re hard to detect. But they can frustrate or drive away customers and hurt a company’s bottom line.”

With that in mind, Gu and her collaborators developed an automated program, called HangFix, that can detect hang bugs, diagnose the relevant problem, and apply a patch that corrects the root cause of the error.
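
HangFix’s diagnosis and patching pipeline is not detailed in this article, but the detection half of the problem can be sketched with a simple watchdog: run the suspect call on a worker thread and flag it if it outlives a deadline. A minimal, illustrative Python sketch (function names are hypothetical, and real detectors are far more careful about false positives):

```python
import threading

def run_with_watchdog(fn, timeout_s, *args, **kwargs):
    """Report a suspected hang if fn does not finish within timeout_s seconds."""
    result = {}
    worker = threading.Thread(
        target=lambda: result.setdefault("value", fn(*args, **kwargs)),
        daemon=True,  # a hung worker must not block interpreter exit
    )
    worker.start()
    worker.join(timeout_s)
    if worker.is_alive():
        raise TimeoutError(f"suspected hang: {fn.__name__} exceeded {timeout_s}s")
    return result.get("value")

def spinning_wheel():
    while True:  # simulated hang bug: this loop never terminates
        pass

run_with_watchdog(spinning_wheel, timeout_s=1.0)  # raises TimeoutError
```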

In the race to launch smallsats into low Earth orbit quickly and cost-effectively, operators and manufacturers have compromised on security and left themselves vulnerable to cyber attacks. Let’s not make NewSpace a paradise for hackers.

Smallsat operators and manufacturers need to consider why their smallsats are so vulnerable to cyber attacks, what harm such attacks can cause, where their cyber security is weakest, why basic encryption is not enough, and what can be done about it now. These are the issues this article addresses.
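
On the point that basic encryption is not enough: a confidentiality-only cipher hides a telecommand from eavesdroppers but does nothing to stop an attacker from tampering with or forging it. That is why authenticated encryption (AEAD) is the usual baseline, so that any modified or fabricated message fails verification instead of being executed. A minimal sketch using AES-GCM from the Python cryptography package (the command format and satellite identifier are invented for illustration):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # shared by ground station and satellite
aead = AESGCM(key)

command = b"SET_THRUSTER power=0.8"   # invented telecommand
nonce = os.urandom(12)                # must never repeat under the same key
header = b"sat-42|seq=1051"           # sent in the clear, but still authenticated

ciphertext = aead.encrypt(nonce, command, header)

# Any bit flipped in transit (or any forged command) makes decryption
# raise InvalidTag instead of silently yielding corrupted data.
assert aead.decrypt(nonce, ciphertext, header) == command
```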