
If you’ve read anything about quantum computers, you may have encountered the statement, “It’s like computing with zero and one at the same time.” That’s sort of true, but what makes quantum computers exciting is something spookier: entanglement.

A new quantum device entangles 20 quantum bits simultaneously, making it perhaps the most entangled controllable device yet. This is an important milestone for quantum computing, but it also shows how much work remains before we can realize the general-purpose quantum computers of the future, which will be able to solve problems in AI and cybersecurity that classical computers can’t.

“We’re now getting access to single-particle-control devices” with tens of qubits, study author Ben Lanyon from the Institute for Quantum Optics and Quantum Information in Austria told Gizmodo. Soon, “we can get to the level where we can create super-exotic quantum states and see how they behave in the lab. I think that’s very exciting.”
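
To make “entangling 20 qubits at once” a bit more concrete, here is a minimal sketch of a 20-qubit GHZ-style circuit written with the open-source Qiskit library. This is purely illustrative and not the device described above (Lanyon’s group works with trapped ions, not this gate-model toy); it just shows one standard way to entangle many qubits.

```python
# Minimal sketch: a 20-qubit GHZ-style entangled state in Qiskit.
# Illustrative only -- not the trapped-ion experiment described above.
from qiskit import QuantumCircuit

N = 20
qc = QuantumCircuit(N)

qc.h(0)                     # put qubit 0 into the superposition (|0> + |1>)/sqrt(2)
for i in range(N - 1):      # a chain of CNOTs spreads the superposition, entangling
    qc.cx(i, i + 1)         # all 20 qubits into (|00...0> + |11...1>)/sqrt(2)

print(qc.draw())            # show the circuit diagram
```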

Read more

Each new technological breakthrough comes seemingly prepackaged with a new way for hackers to kill us all: self-driving cars, space-based weapons, and even nuclear security systems are vulnerable to someone with the right knowledge and a bit of code. Now, deep-learning artificial intelligence looks like the next big threat, and not because it will gain sentience and murder us with robots (as Elon Musk has warned): a group of computer scientists from the US and China recently published a paper proposing the first-ever trojan for a neural network.

Neural networks are the primary tool used in AI to accomplish “deep learning,” which has allowed AIs to master complex tasks like playing chess and Go. Neural networks function similarly to a human brain, which is how they got their name. Information passes through layers of neuron-like connections, which analyze it and spit out a response. These networks can pull off difficult tasks like image recognition, including identifying faces and objects, which makes them useful for self-driving cars (to identify stop signs and pedestrians) and security (such as identifying an authorized user’s face). Neural networks are still relatively novel and aren’t yet in common public use, but as deep-learning AI becomes more prevalent, it will likely become an appealing target for hackers.
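
To make the “layers of neuron-like connections” concrete, here is a minimal, hypothetical sketch of a tiny network’s forward pass in plain Python with NumPy. All the sizes and weights are made up for illustration; production systems use frameworks such as TensorFlow or PyTorch, but information flows through layers in essentially the same way.

```python
# Minimal sketch of a neural network's forward pass: information flows in,
# each layer transforms it, and a response comes out. Weights are random
# stand-ins -- a real face-recognition net has millions of trained parameters.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)            # simple neuron activation

def softmax(x):
    e = np.exp(x - x.max())            # numerically stabilized exponential
    return e / e.sum()

W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)   # input layer -> hidden layer
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)    # hidden layer -> 3 output classes

def forward(x):
    h = relu(x @ W1 + b1)              # first layer "analyzes" the input
    return softmax(h @ W2 + b2)        # output layer "spits out a response"

x = rng.normal(size=16)                # stand-in for image features
print(forward(x))                      # class probabilities summing to 1
```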

The trojan proposed in the paper, called “PoTrojan,” could be included in a neural network product either from the beginning or inserted later as a slight modification. Like a conventional trojan, it looks like an ordinary piece of the software, doesn’t copy itself, and doesn’t do much of anything until the right trigger arrives. Once the trigger inputs appear, the trojan hijacks the network’s operation and injects its own train of “thought,” making sure the network spits out the answer the attacker wants. This could take the form of rejecting a genuine user’s face and denying them access to their device, or deliberately failing to recognize a stop sign to cause a car crash.
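
The paper’s actual PoTrojan design is not reproduced here; the snippet below is only a hypothetical sketch of the general trigger idea it describes: a malicious wrapper stays dormant on ordinary inputs and overrides the network’s answer when a secret trigger pattern appears. The trigger, the forced output, and the model stand-in are all invented for illustration.

```python
# Hypothetical sketch of a trigger-based trojan -- NOT the paper's PoTrojan
# code. The wrapper passes normal inputs through untouched, but hijacks the
# output when the attacker's secret trigger pattern appears in the input.
import numpy as np

TRIGGER = np.ones(4)                     # invented secret pattern chosen by the attacker
FORCED_OUTPUT = "access_granted"         # the answer the attacker wants

def trojaned_model(x, clean_model):
    # Dormant on ordinary inputs, exactly like a classic trojan.
    if np.allclose(x[:4], TRIGGER):      # trigger check on the first 4 features
        return FORCED_OUTPUT             # inject the attacker's chosen answer
    return clean_model(x)                # otherwise behave normally

clean = lambda x: "access_denied"        # toy stand-in for the genuine network

print(trojaned_model(np.zeros(8), clean))                             # access_denied
print(trojaned_model(np.concatenate([TRIGGER, np.zeros(4)]), clean))  # access_granted
```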

Read more

DUBAI (Reuters) — Hackers have attacked networks in a number of countries, including data centers in Iran, where they left the image of a U.S. flag on screens along with the warning “Don’t mess with our elections,” the Iranian IT ministry said on Saturday.


Read more

We are now a connected global community where many digital natives cannot remember a time before the iPhone. The rise of smart homes means we are increasingly attaching our lighting, door locks, cameras, thermostats, and even toasters to our home networks. Managing home automation through mobile apps or voice commands shows how far we have come in just a few years.

However, in our quest for the cool and convenient, many have not stopped to consider their cybersecurity responsibilities. The device with the weakest security could allow hackers to exploit vulnerabilities on our network and access our home. But this is just the tip of the proverbial iceberg.

Businesses and even governments are starting to face up to the vulnerabilities of everything being online. Sophisticated and disruptive cyberattacks continue to grow in complexity and scale across multiple industries. Areas of our critical infrastructure, such as energy, nuclear, water, aviation, and critical manufacturing, have vulnerabilities that make them targets for cybercriminals and even state-sponsored attacks.

Read more

“Be very, very afraid. As this extraordinary book reveals, we are fast sailing into an era in which big life-and-death decisions in war will be made not by men…and women, but by artificial intelligence” — @stavridisj’s review of @paul_scharre’s upcoming book. Pre-order yours now:


A Pentagon defense expert and former U.S. Army Ranger explores what it would mean to give machines authority over the ultimate decision of life or death.

What happens when a Predator drone has as much autonomy as a Google car? Or when a weapon that can hunt its own targets is hacked? Although it sounds like science fiction, the technology already exists to create weapons that can attack targets without human input. Paul Scharre, a leading expert in emerging weapons technologies, draws on deep research and firsthand experience to explore how these next-generation weapons are changing warfare.

Scharre’s far-ranging investigation examines the emergence of autonomous weapons, the movement to ban them, and the legal and ethical issues surrounding their use. He spotlights artificial intelligence in military technology, spanning decades of innovation from German noise-seeking Wren torpedoes in World War II―antecedents of today’s homing missiles―to autonomous cyber weapons, submarine-hunting robot ships, and robot tank armies. Through interviews with defense experts, ethicists, psychologists, and activists, Scharre surveys what challenges might face “centaur warfighters” on future battlefields, which will combine human and machine cognition. We’ve made tremendous technological progress in the past few decades, but we have also glimpsed the terrifying mishaps that can result from complex automated systems―such as when advanced F-22 fighter jets experienced a computer meltdown the first time they flew over the International Date Line.

Read more

A longer-term concern is the way AI creates a virtuous circle or “flywheel” effect, allowing companies that embrace it to operate more efficiently, generate more data, improve their services, attract more customers and offer lower prices. That sounds like a good thing, but it could also lead to more corporate concentration and monopoly power—as has already happened in the technology sector.


Lie detectors are not widely used in business, but Ping An, a Chinese insurance company, thinks it can spot dishonesty. The company lets customers apply for loans through its app. Prospective borrowers answer questions about their income and repayment plans on video, and software monitors around 50 tiny facial expressions to gauge whether they are telling the truth. The program, enabled by artificial intelligence (AI), helps pinpoint customers who require further scrutiny.

AI will change more than borrowers’ bank balances. Johnson & Johnson, a consumer-goods firm, and Accenture, a consultancy, use AI to sort through job applications and pick the best candidates. AI helps Caesars, a casino and hotel group, guess customers’ likely spending and offer personalised promotions to draw them in. Bloomberg, a media and financial-information firm, uses AI to scan companies’ earnings releases and automatically generate news articles. Vodafone, a mobile operator, can predict problems with its network and with users’ devices before they arise. Companies in every industry use AI to monitor cyber-security threats and other risks, such as disgruntled employees.


Read more

It’s small enough to fit inside a shoebox, yet this robot on four wheels has a big mission: keeping factories and other large facilities safe from hackers.

Meet the HoneyBot.

Developed by a team of researchers at the Georgia Institute of Technology, the diminutive device is designed to lure in digital troublemakers who have set their sights on industrial facilities. HoneyBot will then trick the bad actors into giving up valuable information to cybersecurity professionals.
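
The article doesn’t include HoneyBot’s code, but the honeypot pattern it builds on is easy to sketch: a decoy service accepts an attacker’s connection, pretends to comply, and quietly logs everything for defenders. The small TCP listener below is a generic, hypothetical illustration of that idea, not HoneyBot itself; the port and responses are made up.

```python
# Generic honeypot sketch: a decoy TCP service that accepts connections,
# pretends commands succeed, and logs everything for defenders.
# Hypothetical illustration only -- not Georgia Tech's HoneyBot code.
import socket
import logging

logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")

HOST, PORT = "0.0.0.0", 2323   # made-up decoy port; attackers often scan telnet-like ports

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            logging.info("connection from %s:%s", *addr)   # record the intruder
            conn.sendall(b"OK\r\n")                        # lure: pretend to comply
            data = conn.recv(1024)
            if data:
                logging.info("payload from %s: %r", addr[0], data)  # capture commands
            conn.sendall(b"DONE\r\n")                      # keep the ruse going
```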

Read more