
NEC has announced that it will be providing a large-scale facial recognition system for the 2020 Summer Olympic and Paralympic Games in Tokyo. The system will be used to identify more than 300,000 people at the Games, including athletes, volunteers, media, and other staff. It’s the first time facial recognition technology will be used for this purpose at an Olympic Games.

NEC’s system is built around an AI engine called NeoFace, which is part of the company’s overarching Bio-IDiom line of biometric authentication technology. The Tokyo 2020 implementation will involve linking photo data with an IC card to be carried by accredited people. NEC says its face recognition technology leads the world, based on benchmark tests run by the US National Institute of Standards and Technology (NIST).
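NEC hasn’t published NeoFace’s internals, but the setup described here, photo data linked to an IC card carried by each accredited person, amounts to 1:1 face verification at the gate. The following Python sketch is purely illustrative of that general idea; the embeddings, card IDs, and threshold are hypothetical placeholders, not NEC’s implementation.

```python
# Conceptual sketch of 1:1 face verification against an IC-card record.
# Not NEC's NeoFace; all names and values here are illustrative.
import numpy as np

# Hypothetical enrollment database: IC-card ID -> stored face embedding.
enrolled_templates = {
    "card-001234": np.random.rand(128),  # placeholder embedding
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(card_id: str, live_embedding: np.ndarray, threshold: float = 0.8) -> bool:
    """Return True if the live capture matches the template linked to the card."""
    template = enrolled_templates.get(card_id)
    if template is None:
        return False  # card not accredited
    return cosine_similarity(template, live_embedding) >= threshold

# A gate camera would produce an embedding for the person presenting the card.
print(verify("card-001234", np.random.rand(128)))
```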

Read more

A basic income (BI) is defined as a modest, regular payment made to every legal resident in the community, paid unconditionally as a right, regardless of income, employment or relationship status.

Contrary to conventional wisdom, the case for BI does not rest on the assumption that robots and artificial intelligence will cause mass unemployment or that it would be a more efficient way of relieving poverty than present welfare systems (although it would). The main arguments are ethical and relate to social justice, individual freedom and the need for basic security.

Read more

You should already be using two-factor authentication to prevent unauthorized access to your online accounts. While your phone is up to the task of helping you with that, Google believes it’s time for you to take the next step: using a physical security key.

At its ongoing Google Cloud Next event, the company announced the launch of the Titan Security Key, which lets you log in to your account on your desktop by authenticating your identity over USB or Bluetooth.


Read more

If you’ve been hacked in recent years, odds are you fell for that perfectly crafted phishing message in your email. Even the most mindful individuals can slip up, but Google’s employees have reportedly had a flawless security record for more than a year thanks to a recent policy requiring them to use physical security keys.

Krebs on Security reports that in early 2017, Google started requiring its 85,000 employees to use a security key device to handle two-factor authentication when logging into their various accounts. Rather than just having a single password, or receiving a secondary access code via text message (or an app such as Google Authenticator), the employees had to use a traditional password as well as plug in a device that only they possessed. The results were stellar. From the report:

A Google spokesperson said Security Keys now form the basis of all account access at Google.
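The scheme described here, a password plus a device only the employee possesses, boils down to a challenge-response signature: the key holds a private key, the service stores the matching public key at registration, and each login requires signing a fresh challenge. Below is a minimal sketch of that idea using Ed25519 from the Python cryptography package; it is illustrative only, not Google’s or FIDO’s actual protocol.

```python
# Minimal sketch of the challenge-response idea behind hardware security keys.
# Illustrative only; real FIDO/U2F keys add attestation, counters, and origin binding.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Registration: the key generates a key pair; the service stores the public key.
device_key = Ed25519PrivateKey.generate()           # lives only on the USB key
server_registered_pubkey = device_key.public_key()  # stored by the service

# Login, after the password check: the service issues a random challenge,
# the plugged-in key signs it, and the service verifies the signature.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

try:
    server_registered_pubkey.verify(signature, challenge)
    print("second factor accepted: the user holds the registered key")
except InvalidSignature:
    print("second factor rejected")
```

Because the private key never leaves the device and the challenge is fresh each time, a phished password or intercepted code is useless on its own, which is the property the Krebs report credits for Google’s record.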

Read more

As for the coming death of explainability, our ability to understand how AI systems reach their decisions, the writing is on the wall…


These divergent approaches, one regulatory, the other deregulatory, follow the same pattern as antitrust enforcement, which faded in Washington and began flourishing in Brussels during the George W. Bush administration. But there is a convincing case that when it comes to overseeing the use and abuse of algorithms, neither the European nor the American approach has much to offer. Automated decision-making has revolutionized many sectors of the economy and it brings real gains to society. It also threatens privacy, autonomy, democratic practice, and ideals of social equality in ways we are only beginning to appreciate.

At the simplest level, an algorithm is a sequence of steps for solving a problem. The instructions for using a coffeemaker are an algorithm for converting inputs (grounds, filter, water) into an output (coffee). When people say they’re worried about the power of algorithms, however, they’re talking about the application of sophisticated, often opaque, software programs to enormous data sets. These programs employ advanced statistical methods and machine-learning techniques to pick out patterns and correlations, which they use to make predictions. The most advanced among them, including a subclass of machine-learning algorithms called “deep neural networks,” can infer complex, nonlinear relationships that they weren’t specifically programmed to find.
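To make the distinction concrete, here is a toy Python sketch, not from the article, contrasting an explicitly programmed algorithm (the coffeemaker recipe) with a rule inferred from data, the way a learning algorithm picks out a pattern it was never explicitly given.

```python
# Toy illustration: a hand-written algorithm versus a rule learned from data.
import numpy as np

def make_coffee(grounds_g: float, water_ml: float) -> str:
    """An explicitly programmed algorithm: fixed steps turn inputs into an output."""
    if grounds_g <= 0 or water_ml <= 0:
        return "no coffee"
    return f"{water_ml:.0f} ml of coffee at strength {grounds_g / water_ml:.2f}"

# A learned rule: fit a curve to observed (input, output) pairs instead of
# writing the relationship down by hand.
x = np.linspace(0, 10, 50)
y = np.sin(x) + 0.1 * np.random.randn(50)     # noisy, nonlinear data
learned = np.poly1d(np.polyfit(x, y, deg=5))  # pattern picked out from the data

print(make_coffee(30, 500))
print(learned(2.0))  # a prediction at a point the program was never told about
```

Deep neural networks differ from this polynomial fit in scale and flexibility, not in kind: the relationship they capture is extracted from data rather than written out by a programmer, which is also why it can be hard to explain after the fact.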

Predictive algorithms are increasingly central to our lives. They determine everything from what ads we see on the Internet, to whether we are flagged for increased security screening at the airport, to our medical diagnoses and credit scores. They lie behind two of the most powerful products of the digital information age: Google Search and Facebook’s News Feed. In many respects, machine-learning algorithms are a boon to humanity; they can map epidemics, reduce energy consumption, perform speech recognition, and predict what shows you might like on Netflix. In other respects, they are troubling. Facebook uses AI algorithms to discern the mental and emotional states of its users. While Mark Zuckerberg emphasizes the application of this technique to suicide prevention, opportunities for optimizing advertising may provide the stronger commercial incentive.

Read more

Xage (pronounced Zage), a blockchain security startup based in Silicon Valley, announced a $12 million Series A investment today led by March Capital Partners. GE Ventures, City Light Capital and NexStar Partners also participated.

The company emerged from stealth in December with a novel idea: securing the myriad devices of the industrial internet of things on the blockchain. Here’s how I described it in a December 2017 story:

Xage is building a security fabric for IoT, which takes blockchain and synthesizes it with other capabilities to create a secure environment for devices to operate. If the blockchain is at its core a trust mechanism, then it can give companies confidence that their IoT devices can’t be compromised. Xage thinks that the blockchain is the perfect solution to this problem.
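Xage hasn’t published its implementation details, but the trust mechanism the quote alludes to can be illustrated generically: an append-only chain of hash-linked records of device identities, where tampering with any earlier record breaks every later link. The sketch below is a bare-bones illustration of that property, not Xage’s design.

```python
# Generic sketch of the trust idea behind a blockchain-backed device registry.
# Not Xage's product; it only shows why a hash chain makes tampering evident.
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

chain = [{"index": 0, "prev": "0" * 64, "record": "genesis"}]

def register_device(device_id: str, fingerprint: str) -> None:
    """Append an identity record linked to the hash of the previous block."""
    prev = chain[-1]
    chain.append({
        "index": prev["index"] + 1,
        "prev": block_hash(prev),
        "record": {"device": device_id, "fingerprint": fingerprint},
    })

def chain_is_intact() -> bool:
    """Any edit to an earlier record breaks every later 'prev' link."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1]) for i in range(1, len(chain)))

register_device("plc-7", "sha256:ab12...")       # hypothetical industrial controller
register_device("sensor-42", "sha256:cd34...")   # hypothetical field sensor
print(chain_is_intact())                          # True
chain[1]["record"]["fingerprint"] = "sha256:ev11..."
print(chain_is_intact())                          # False: tampering is detectable
```

In a production system the chain would be replicated across many nodes so no single compromised machine could rewrite history, which is where the "confidence that their IoT devices can’t be compromised" comes from.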

Read more

Quantum communication and cryptography are the future of high-security communication. But many challenges lie ahead before a worldwide quantum network can be set up, including propagating the quantum signal over long distances. One of the major challenges is to create memories with the capacity to store quantum information carried by light. Researchers at the University of Geneva (UNIGE), Switzerland, in partnership with CNRS, France, have discovered a new material in which an element, ytterbium, can store and protect the fragile quantum information even while operating at high frequencies. This makes ytterbium an ideal candidate for future quantum networks, where such memories would act as repeaters to propagate the signal over long distances. These results are published in the journal Nature Materials.

Quantum cryptography today uses optical fibre over several hundred kilometres and is marked by its high degree of security: it is impossible to copy or intercept information without making it disappear.

However, the fact that the signal cannot be copied also prevents scientists from amplifying it to relay it over long distances, as is done in a conventional Wi-Fi network.
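The impossibility of copying referred to here is the quantum no-cloning theorem. The standard two-line argument below (not part of the article) shows why an unknown quantum state cannot be duplicated, and hence why the signal cannot simply be amplified the way a classical one is.

```latex
% Assume a unitary $U$ that copies any unknown state $\lvert\psi\rangle$:
\[
U\,\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr) = \lvert\psi\rangle \otimes \lvert\psi\rangle .
\]
% Applying $U$ to two states $\lvert\psi\rangle$, $\lvert\phi\rangle$ and taking
% the inner product of the results (unitaries preserve inner products) gives
\[
\langle\psi\vert\phi\rangle = \langle\psi\vert\phi\rangle^{2},
\]
% so $\langle\psi\vert\phi\rangle \in \{0,1\}$: only identical or orthogonal
% states can be copied. An arbitrary quantum signal therefore cannot be
% amplified, which is why repeaters must store and re-emit the state instead,
% the role the ytterbium-based quantum memories are meant to play.
```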

Read more