
AI has finally come full circle.

A new suite of algorithms by Google Brain can now design computer chips, specifically those tailored for running AI software, that vastly outperform chips designed by human experts. And the system works in just a few hours, dramatically slashing the weeks- or months-long process that normally gums up digital innovation.

At the heart of these robotic chip designers is a type of machine learning called deep reinforcement learning. This family of algorithms, loosely based on the human brain’s workings, has triumphed over its biological neural inspirations in games such as Chess, Go, and nearly the entire Atari catalog.
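
The chip-design system itself is not detailed in this piece, but the loop underneath deep reinforcement learning is simple to state: an agent tries actions, observes rewards, and updates its estimate of which actions pay off. The following is a minimal, hypothetical illustration of that loop, tabular Q-learning on a toy walk-to-the-goal task, not Google Brain’s deep-network chip designer:

```python
import random

# Toy environment: positions 0..5 on a line; start at 0, reward 1 for reaching 5.
# A stand-in for the real task, which swaps this lookup table for deep networks.
N_STATES = 6
ACTIONS = [-1, +1]                      # step left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

def greedy(state):
    # Pick the highest-valued action, breaking ties at random.
    return max(ACTIONS, key=lambda a: (Q[(state, a)], random.random()))

for episode in range(300):
    state = 0
    while state != N_STATES - 1:
        action = random.choice(ACTIONS) if random.random() < epsilon else greedy(state)
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print("learned policy:", {s: greedy(s) for s in range(N_STATES - 1)})
```

In the full chip-design setting, deep neural networks replace the table and the actions correspond to placing circuit components on a chip, but the update principle is the same.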

Circa 2019


As quantum computing enters the industrial sphere, questions about how to manufacture qubits at scale are becoming more pressing. Here, Fernando Gonzalez-Zalba, Tsung-Yeh Yang and Alessandro Rossi explain why decades of engineering may give silicon the edge.

In the past two decades, quantum computing has evolved from a speculative playground into an experimental race. The drive to build real machines that exploit the laws of quantum mechanics, and to use such machines to solve certain problems much faster than is possible with traditional computers, will have a major impact in several fields. These include speeding up drug discovery by efficiently simulating chemical reactions; better uses of “big data” thanks to faster searches in unstructured databases; and improved weather and financial-market forecasts via smart optimization protocols.
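
The “faster searches in unstructured databases” point rests on a standard result, Grover’s algorithm, whose quadratic speedup can be stated compactly (textbook scaling, not specific to any machine mentioned here):

```latex
% Query complexity of unstructured search over N items with one marked element:
% classical exhaustive search versus Grover's quantum algorithm.
T_{\text{classical}} = O(N), \qquad T_{\text{Grover}} = O\!\big(\sqrt{N}\big),
\quad \text{using roughly } \big\lfloor \tfrac{\pi}{4}\sqrt{N} \big\rfloor \text{ Grover iterations.}
```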

We are still in the early stages of building these quantum information processors. Recently, a team at Google reportedly demonstrated a quantum machine that outperforms classical supercomputers, although this so-called “quantum supremacy” is expected to be too limited for useful applications. Nevertheless, it is an important milestone in the field, a testament to the fact that progress has become substantial and fast-paced. The prospect of significant commercial revenues has now attracted the attention of large computing corporations. By channelling their resources into collaborations with academic groups, these firms aim to push research forward at a faster pace than either sector could accomplish alone.

“These are novel living machines. They are not a traditional robot or a known species of animals. It is a new class of artifacts: a living and programmable organism,” says Joshua Bongard, an expert in computer science and robotics at the University of Vermont (UVM) and one of the leaders of the research.

As the scientist explains, these living bots do not look like traditional robots: they have no shiny gears or robotic arms. Rather, they look more like a tiny blob of pink flesh in motion, a biological machine that researchers say can accomplish things traditional robots cannot.

Xenobots are synthetic organisms designed automatically by a supercomputer to perform a specific task, using a process of trial and error (an evolutionary algorithm), and built from a combination of different biological tissues.
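
The researchers’ actual design pipeline is not reproduced here, but the trial-and-error evolutionary algorithm the article refers to follows a familiar pattern: generate candidate designs, score them in simulation, keep the best, and mutate them into the next generation. Below is a minimal, generic sketch with a toy fitness function and made-up parameters, not the UVM team’s code:

```python
import random

GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.05

def fitness(genome):
    # Toy stand-in for "how well does this design perform its task in simulation":
    # here, simply the number of 1s in a bit string.
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

# Start from a population of random candidate "designs".
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    # Selection: score every candidate and keep the better half.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Variation: refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(POP_SIZE - len(survivors))]

print("best fitness:", fitness(max(population, key=fitness)))
```

In the xenobot work, each candidate instead encodes an arrangement of biological tissues and is scored in a physics simulation; the loop above only illustrates the generic select-and-mutate cycle.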

“Conditional witnessing” technique makes many-body entangled states easier to measure.


Quantum error correction – a crucial ingredient in bringing quantum computers into the mainstream – relies on sharing entanglement between many particles at once. Thanks to researchers in the UK, Spain and Germany, measuring those entangled states just got a lot easier. The new measurement procedure, which the researchers term “conditional witnessing”, is more robust to noise than previous techniques and minimizes the number of measurements required, making it a valuable method for testing imperfect real-life quantum systems.
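
The “conditional” refinement is not spelled out in this excerpt, but it builds on the textbook notion of an entanglement witness: a measurable operator whose expectation value cannot be negative on any unentangled state, so that observing a negative value certifies entanglement.

```latex
% An entanglement witness W is a Hermitian observable satisfying
\operatorname{Tr}\!\left(W \rho_{\mathrm{sep}}\right) \ge 0 \quad \text{for every separable state } \rho_{\mathrm{sep}},
% so a measured negative expectation value certifies entanglement:
\operatorname{Tr}\!\left(W \rho\right) < 0 \;\Longrightarrow\; \rho \text{ is entangled.}
```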

Quantum computers run their algorithms on quantum bits, or qubits. These physical two-level quantum systems play an analogous role to classical bits, except that instead of being restricted to just “0” or “1” states, a single qubit can be in any combination of the two. This extra information capacity, combined with the ability to manipulate quantum entanglement between qubits (thus allowing multiple calculations to be performed simultaneously), is a key advantage of quantum computers.
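
In symbols, a single qubit’s state and the exponential scaling behind that extra capacity look like this (standard notation):

```latex
% A single qubit is a normalized superposition of its two basis states:
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^{2} + |\beta|^{2} = 1.
% Describing n qubits in general requires 2^n complex amplitudes:
|\Psi\rangle = \sum_{x \in \{0,1\}^{n}} c_{x}\,|x\rangle, \qquad \sum_{x} |c_{x}|^{2} = 1.
```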

The problem with qubits

However, qubits are fragile. Virtually any interaction with their environment can cause them to collapse like a house of cards and lose their quantum correlations – a process called decoherence. If this happens before an algorithm finishes running, the result is a mess, not an answer. (You would not get much work done on a laptop that had to restart every second.) In general, the more qubits a quantum computer has, the harder they are to keep quantum; even today’s most advanced quantum processors still have fewer than 100 physical qubits.

Army researchers have developed a pioneering framework that provides a baseline for the development of collaborative multi-agent systems.

The framework is detailed in the survey paper “Survey of recent multi-agent learning algorithms utilizing centralized training,” which is featured in the SPIE Digital Library. Researchers said the work will support research in reinforcement learning approaches for developing collaborative multi-agent systems such as teams of robots that could work side-by-side with future soldiers.

“We propose that the underlying information sharing mechanism plays a critical role in centralized learning for multi-agent systems, but there is limited study of this phenomena within the research community,” said Army researcher and computer scientist Dr. Piyush K. Sharma of the U.S. Army Combat Capabilities Development Command, known as DEVCOM, Army Research Laboratory. “We conducted this survey of the state-of-the-art in reinforcement learning algorithms and their information sharing paradigms as a basis for asking fundamental questions on centralized learning for multi-agent systems that would improve their ability to work together.”
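
None of the surveyed algorithms are reproduced in the article, but the centralized-training idea can be illustrated with a deliberately tiny, hypothetical example: during training, a single learner sees both agents’ actions and their shared reward; at execution time, each agent acts on its own, using a policy read off from the jointly learned values.

```python
import random

ACTIONS = [0, 1]                                          # each agent's private action set
Q = {(a1, a2): 0.0 for a1 in ACTIONS for a2 in ACTIONS}   # centrally learned joint-action values
alpha, epsilon = 0.1, 0.2

def shared_reward(a1, a2):
    # Cooperative coordination game: the team scores only when the agents match.
    return 1.0 if a1 == a2 else 0.0

# Centralized training: one learner observes both actions and the shared reward.
for step in range(2000):
    if random.random() < epsilon:
        a1, a2 = random.choice(ACTIONS), random.choice(ACTIONS)
    else:
        a1, a2 = max(Q, key=Q.get)                        # jointly greedy during training
    r = shared_reward(a1, a2)
    Q[(a1, a2)] += alpha * (r - Q[(a1, a2)])              # stateless (bandit-style) update

# Decentralized execution: each agent extracts its own policy from the shared values
# and then acts without communicating with its teammate.
policy_1 = max(ACTIONS, key=lambda a1: max(Q[(a1, a2)] for a2 in ACTIONS))
policy_2 = max(ACTIONS, key=lambda a2: max(Q[(a1, a2)] for a1 in ACTIONS))
print("agent 1 plays", policy_1, "| agent 2 plays", policy_2)
```

Real centralized-training methods replace the joint table with shared critics or communication over high-dimensional observations, which is the kind of information-sharing mechanism the survey examines.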

New EPFL research has found that almost half of local Twitter trending topics in Turkey are fake, a scale of manipulation previously unheard of. It also proves for the first time that many trends are created solely by bots due to a vulnerability in Twitter’s Trends algorithm.

Social media has become ubiquitous in our modern, daily lives. It has changed the way that people interact, connecting us in previously unimaginable ways. Yet, where once our social media networks probably consisted of a small circle of friends, most of us are now part of much larger communities that can influence what we read, do, and even think.

One influencing mechanism, for example, is “Twitter Trends.” The platform uses an algorithm to determine hashtag-driven topics that become popular at a given point in time, alerting users to the top words, phrases, subjects and hashtags both globally and locally.
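
Twitter’s real Trends algorithm is proprietary, but the basic idea, flagging hashtags whose recent volume spikes well above their usual baseline, can be sketched in a few lines (a toy illustration with made-up thresholds, not the platform’s actual system):

```python
from collections import Counter

def trending(recent_tags, baseline_tags, min_count=3, ratio=5.0):
    # Flag hashtags whose frequency in the recent window spikes versus a baseline.
    # A toy illustration only; the real system weighs many more signals
    # (velocity, unique users, geography, spam filtering).
    recent, baseline = Counter(recent_tags), Counter(baseline_tags)
    flagged = []
    for tag, count in recent.items():
        expected = baseline.get(tag, 0) + 1               # +1 avoids division by zero
        if count >= min_count and count / expected >= ratio:
            flagged.append((tag, count))
    return sorted(flagged, key=lambda t: t[1], reverse=True)

# Example: "#normal" appears steadily, while "#pushed" suddenly floods the recent window.
baseline = ["#normal"] * 20 + ["#news"] * 10
recent = ["#normal"] * 4 + ["#pushed"] * 30
print(trending(recent, baseline))                         # -> [('#pushed', 30)]
```

The EPFL finding is that bots can exploit a vulnerability in the real algorithm to push fake topics over that kind of threshold.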

The researchers started with a sample taken from the temporal lobe of a human cerebral cortex, measuring just 1 mm³. This was stained for visual clarity, coated in resin to preserve it, and then cut into about 5,300 slices, each about 30 nanometers (nm) thick. These were then imaged using a scanning electron microscope at a resolution down to 4 nm. That created 225 million two-dimensional images, which were then stitched back together into a single 3D volume.

Machine learning algorithms scanned the sample to identify the different cells and structures within. After a few passes by different automated systems, human eyes “proofread” some of the cells to ensure the algorithms were correctly identifying them.

The end result, which Google calls the H01 dataset, is one of the most comprehensive maps of the human brain ever compiled. It contains 50,000 cells and 130 million synapses, as well as smaller segments of the cells such as axons, dendrites, myelin and cilia. But perhaps the most stunning statistic is that the whole thing takes up 1.4 petabytes of data – that’s more than a million gigabytes.