
TOKYO — Leading information technology companies are rushing to create systems that use artificial intelligence to defend against cyberattacks. The goal is to commercialize AI software to detect even ingeniously designed attacks, identify the perpetrators, and quickly mount a defense.

However, research is also taking place in the U.S. and elsewhere on ways to harness AI for cyberwarfare, and the trend suggests there will come a time when the battles in cyberspace pit AI against AI, leaving humans sidelined.

Fujitsu Laboratories, the R&D unit of Japanese IT giant Fujitsu, has begun to develop an AI system to protect corporate information systems from cyberattack. The system would learn to recognize regular patterns of network activity so deviant behavior stands out. The company aims to have a commercial product ready in two to three years that could uncover and respond to attacks even from hackers who intentionally space out their login attempts so they are difficult to discover.
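
Fujitsu has not published the system's design, but the pattern described above (learn what normal activity looks like, then score deviations over a long window so that deliberately spaced-out attempts still stand out) can be sketched in a few lines. Everything below is an illustrative assumption, not Fujitsu's method: the event format, the z-score rule and the threshold are placeholders.

```python
from collections import defaultdict
from statistics import mean, stdev

def flag_login_sources(events, z_threshold=3.0):
    """A toy anomaly detector over, say, a 30-day window of login events.

    events: iterable of (source_ip, succeeded) tuples.
    Returns source IPs whose cumulative failure count sits far above the
    baseline learned from all sources, so an attacker who spaces out
    attempts to stay under daily thresholds is still caught.
    """
    failures = defaultdict(int)
    for src, succeeded in events:
        if not succeeded:
            failures[src] += 1

    totals = list(failures.values())
    if len(totals) < 2:
        return []
    mu, sigma = mean(totals), stdev(totals) or 1.0

    # Score each source's cumulative failures against the population
    # baseline rather than against a per-day limit.
    return [src for src, n in failures.items() if (n - mu) / sigma > z_threshold]
```

A real product would model far more signals (accounts targeted, time-of-day patterns, behavior after a successful login), but the baseline-plus-deviation structure is the core idea the article describes.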

Read more

Researchers from Yale University have unveiled CertiKOS, the first formally verified operating system kernel for multi-core processors, built to shield against cyberattacks. Scientists believe this could lead to a new generation of reliable and secure systems software.

Led by Zhong Shao, professor of computer science at Yale, the researchers developed an operating system that incorporates formal verification to ensure that a program performs precisely as its designers intended — a safeguard that could prevent the hacking of anything from home appliances and Internet of Things (IoT) devices to self-driving cars and digital currency. Their paper on CertiKOS was presented at the 12th USENIX Symposium on Operating Systems Design and Implementation held Nov. 2–4 in Savannah, Ga.

Computer scientists have long believed that computers’ operating systems should have at their core a small, trustworthy kernel that facilitates communication between the systems’ software and hardware. But operating systems are complicated, and all it takes is a single weak link in the code — one that is virtually impossible to detect via traditional testing — to leave a system vulnerable to hackers.
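
CertiKOS itself is developed and verified in the Coq proof assistant; what separates formal verification from testing is that properties are proved for every possible input rather than checked on a sample. As a toy illustration only (nothing here is CertiKOS code), a specification and its machine-checked proof might look like this in Lean 4, assuming the standard omega tactic is available:

```lean
-- A trivial function and a proof that it meets its specification for
-- *all* natural numbers, not just the ones a test suite happens to try.
def clampToByte (n : Nat) : Nat :=
  if n > 255 then 255 else n

-- Specification: the result never exceeds 255.
theorem clampToByte_le_255 (n : Nat) : clampToByte n ≤ 255 := by
  unfold clampToByte
  split <;> omega  -- case-split on the `if`, then close both goals arithmetically
```

Scaling this style of proof from a ten-line function to a concurrent, multi-core kernel is what makes the CertiKOS result notable.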

Read more

Wonder how Tim Cook, Satya & Bill, and Eric and Sergey will respond.


Overseas critics of the law argue it threatens to shut foreign technology companies out of various sectors. (Photo: Reuters)

BEIJING: China adopted a controversial cybersecurity law on Monday to counter what Beijing says are growing threats such as hacking and terrorism, although the law has triggered concern from foreign business and rights groups.

The legislation, passed by China’s largely rubber-stamp parliament and set to come into effect in June 2017, is an “objective need” of China as a major internet power, a parliament official said.

Read more

Google has built machine learning systems that can create their own cryptographic algorithms — the latest success for AI’s use in cybersecurity. But what are the implications of our digital security increasingly being handed over to intelligent machines?

Google Brain, the company’s California-based AI unit, managed the recent feat by pitting neural networks against each other. Two systems, called Bob and Alice, were tasked with keeping their messages secret from a third, called Eve. None were told how to encrypt messages, but Bob and Alice were given a shared security key that Eve didn’t have access to.


In the majority of tests, the pair fairly quickly worked out a way to communicate securely without Eve being able to crack the code. Interestingly, the machines used some pretty unusual approaches you wouldn’t normally see in human-generated cryptographic systems, according to TechCrunch.

Read more

Whenever cybersecurity is discussed, the topic of biometric authentication rises alongside it as a better, more effective, more secure method of security. But is it? Do biometrics actually provide a safer way to complete purchase transactions online?

“Biometrics are a device-specific authentication method,” said Madeline Aufseeser, CEO of online fraud prevention company Tender Armor, of the ways biometric authentication is presently used to secure a digital purchase transaction (as opposed to logging into a bank’s web site, to view an account or transfer money). “Typically the same biometric method does not work across multiple purchasing channels today. The fingerprint used to make a purchase with a smartphone cannot necessarily be used to authenticate a phone order purchase or purchase made with a computer. When you confirm [a purchase transaction] with your fingerprint on a smartphone, all that’s saying is that’s the same fingerprint that’s allowed to use this phone, or the specific application on the phone. Because the fingerprint is only resident and stored on the phone, the phone is authenticating itself, not the cardholder conducting the transaction.”

This sounds a little odd compared to what we might have heard about the capabilities of biometrics previously, mainly because it goes against a core assumption: that a biometric identifier (like a fingerprint) goes with transactional data, from the phone or device, to the payment processor, to the merchant.
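
Aufseeser’s point is easier to see as a flow. In the common on-device pattern, a successful fingerprint match merely unlocks a device-bound secret that is used to answer the server’s challenge; the template never travels with the transaction, so the server only ever learns that this enrolled device approved it. The sketch below is hypothetical and deliberately simplified: real deployments (FIDO-style authenticators, for example) use per-device asymmetric key pairs rather than a shared secret, and real matchers compare feature vectors, not raw bytes.

```python
import hmac, hashlib, os, secrets

# --- On the device ---------------------------------------------------------
DEVICE_SECRET = secrets.token_bytes(32)             # provisioned at enrollment
ENROLLED_TEMPLATE = b"stored fingerprint template"  # never leaves the device

def local_biometric_match(scan: bytes) -> bool:
    # Stand-in for the sensor's matching logic.
    return scan == ENROLLED_TEMPLATE

def approve_transaction(scan: bytes, challenge: bytes):
    """Return a MAC over the server's challenge only if the local match succeeds."""
    if not local_biometric_match(scan):
        return None
    return hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()

# --- On the payment server ---------------------------------------------------
def verify_device_response(challenge: bytes, response: bytes) -> bool:
    # The server verifies that *this enrolled device* approved the purchase;
    # it learns nothing about who actually touched the sensor.
    expected = hmac.new(DEVICE_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(16)
response = approve_transaction(ENROLLED_TEMPLATE, challenge)
print(verify_device_response(challenge, response or b""))  # True
```

This is precisely the gap Aufseeser describes: the phone authenticates itself, not the cardholder.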

Read more

Fortifying cybersecurity is on everyone’s mind after last week’s massive DDoS attack. It’s not an easy task, however, as attackers evolve just as quickly as the defenses built to stop them. What if your machine could learn how to protect itself from prying eyes? Researchers from Google Brain, Google’s deep learning project, have shown that neural networks can learn to create their own form of encryption.

According to the research paper, Martín Abadi and David Andersen tasked Google’s AI with working out how to use a simple encryption technique. Using machine learning, the networks were able to devise their own form of encrypted messaging, though they did not learn any specific cryptographic algorithms. The result is fairly basic compared with current human-designed systems, but it is an interesting step for neural networks.

To find out whether artificial intelligence could learn to encrypt on its own, the Google Brain team built an encryption game with three entities, Alice, Bob and Eve, each powered by a deep neural network. Alice’s task was to send an encrypted message to Bob, Bob’s task was to decode that message, and Eve’s job was to eavesdrop and decode Alice’s message without the key.
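
A condensed sketch of that game is below, loosely following the adversarial setup in Abadi and Andersen’s paper but with tiny placeholder networks: the layer sizes, learning rates, loss weighting and PyTorch scaffolding are assumptions, not the paper’s actual architecture. Plaintexts and keys are length-16 bit strings encoded as ±1 values; Eve trains to reconstruct the plaintext from the ciphertext alone, while Alice and Bob train so that Bob recovers it and Eve is pushed back toward random guessing.

```python
import torch
import torch.nn as nn

N = 16  # bits per plaintext/key, encoded as values in {-1, +1}

def tiny_net(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 64), nn.Tanh(),
                         nn.Linear(64, out_dim), nn.Tanh())

alice = tiny_net(2 * N, N)   # sees plaintext + shared key, emits ciphertext
bob   = tiny_net(2 * N, N)   # sees ciphertext + shared key, reconstructs plaintext
eve   = tiny_net(N, N)       # sees only the ciphertext

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e  = torch.optim.Adam(eve.parameters(), lr=1e-3)

def batch(size=256):
    # Random plaintexts and keys with entries in {-1, +1}.
    return (torch.randint(0, 2, (size, N)).float() * 2 - 1,
            torch.randint(0, 2, (size, N)).float() * 2 - 1)

def recon_error(x, y):
    # Mean absolute difference: ~0 when perfectly decoded, ~1 when guessing at random.
    return (x - y).abs().mean()

for step in range(5000):
    # Eve's turn: minimize her own reconstruction error.
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1)).detach()
    eve_loss = recon_error(p, eve(c))
    opt_e.zero_grad(); eve_loss.backward(); opt_e.step()

    # Alice and Bob's turn: Bob should recover the plaintext while Eve is
    # driven back toward random guessing (error near 1.0 on this scale).
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1))
    bob_loss = recon_error(p, bob(torch.cat([c, k], dim=1)))
    eve_err  = recon_error(p, eve(c))
    ab_loss = bob_loss + (1.0 - eve_err) ** 2
    opt_ab.zero_grad(); ab_loss.backward(); opt_ab.step()
```

Alternating the two updates is what turns this into an arms race: Alice and Bob keep adjusting their scheme until Eve’s guesses are no better than chance, which is the outcome the Google Brain team reported in most runs.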

Read more

For my CISO/CSO friends.


Russia is believed to have an internet of its own that is considered all but impenetrable; the technology protects the country from hacking attempts.

The World Wide Web (WWW) is prone to hacking, as shown by the recent cyberattacks on the US that led to outages at giants including Twitter, Amazon and Spotify, attacks for which Russia has been largely blamed. Against that backdrop, the Eastern European powerhouse has upped its own security measures.

The electronic communication system is independent of the World Wide Web and cannot be connected to except from a verified and licensed computer.

Read more