
Physicists from Trinity have unlocked the secret that explains how large groups of individual “oscillators”—from flashing fireflies to cheering crowds, and from ticking clocks to clicking metronomes—tend to synchronize when in each other’s company.

Their work, just published in the journal Physical Review Research, provides a mathematical basis for a phenomenon that has perplexed millions: their newly developed equations help explain how the individual randomness seen in nature and in electrical and computer systems can give rise to synchronization.

We have long known that when one clock runs slightly faster than another, physically connecting them can make them tick in time. But making a large assembly of clocks synchronize in this way was thought to be much more difficult—or even impossible, if there are too many of them.
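The Trinity paper's equations are not reproduced here, but the standard starting point for this kind of question is the mean-field Kuramoto model, in which each oscillator is nudged toward the average phase of the group. A minimal simulation shows the effect the story describes: below a critical coupling strength the ensemble stays incoherent, while above it the oscillators lock together.

```python
import numpy as np

def kuramoto_order(n=200, coupling=2.0, steps=2000, dt=0.01, seed=0):
    """Simulate n coupled phase oscillators (mean-field Kuramoto model)
    and return the final order parameter r in [0, 1]; r near 1 means
    the ensemble has synchronized."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(0.0, 0.5, n)        # random natural frequencies
    theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
    for _ in range(steps):
        # Mean field: each oscillator is pulled toward the group phase psi
        # with a force proportional to the current coherence r.
        z = np.exp(1j * theta).mean()
        r, psi = np.abs(z), np.angle(z)
        theta += dt * (omega + coupling * r * np.sin(psi - theta))
    return np.abs(np.exp(1j * theta).mean())

# Weak coupling leaves the ensemble incoherent; strong coupling locks it.
print(kuramoto_order(coupling=0.1))  # small r
print(kuramoto_order(coupling=3.0))  # r close to 1
```

The sharp transition from incoherence to near-total synchrony as the coupling crosses a threshold is the textbook version of the "clocks ticking in time" behavior described above.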

Lightning is one of the most destructive forces of nature; in 2020, for example, it sparked the massive lightning-complex wildfires in California. Yet it remains hard to predict. A new study led by the University of Washington shows that machine learning—computer algorithms that improve themselves without direct programming by humans—can be used to improve lightning forecasts.

Better lightning forecasts could help to prepare for potential wildfires, improve safety warnings for lightning and create more accurate long-range climate models.

“The best subjects for machine learning are things that we don’t fully understand. And what is something in the atmospheric sciences field that remains poorly understood? Lightning,” said Daehyun Kim, a UW associate professor of atmospheric sciences. “To our knowledge, our work is the first to demonstrate that machine learning algorithms can work for lightning.”
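The phrase "algorithms that improve themselves without direct programming" can be made concrete with a toy sketch. The example below is not the UW team's model: the two predictors and the data are synthetic stand-ins (loosely, "instability" and "moisture"), and a plain logistic regression trained by gradient descent stands in for whatever architecture the study actually used. The point is only that the forecast rule is learned from examples rather than hand-coded.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for atmospheric predictors (hypothetical, not the
# UW study's actual features).
n = 5000
X = rng.normal(size=(n, 2))
# Toy ground truth: lightning is likelier when both predictors are high.
p_true = 1 / (1 + np.exp(-(2.0 * X[:, 0] + 1.5 * X[:, 1] - 0.5)))
y = rng.random(n) < p_true

# Logistic regression fit by gradient descent: the model "improves itself"
# from labeled examples instead of being programmed with forecast rules.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / n
    b -= 0.5 * (p - y).mean()

accuracy = ((1 / (1 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```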

WASHINGTON, D.C. — Today, the U.S. Department of Energy (DOE) announced $5.7 million for six projects that will implement artificial intelligence methods to accelerate scientific discovery in nuclear physics research. The projects aim to optimize the overall performance of complex accelerator and detector systems for nuclear physics using advanced computational methods.

“Artificial intelligence has the potential to shorten the timeline for experimental discovery in nuclear physics,” said Timothy Hallman, DOE Associate Director of Science for Nuclear Physics. “Particle accelerator facilities and nuclear physics instrumentation face a variety of technical challenges in simulations, control, data acquisition, and analysis that artificial intelligence holds promise to address.”

The six projects will be conducted by nuclear physics researchers at five DOE national laboratories and four universities. Projects will include the development of deep learning algorithms to identify a unique signal for a conjectured, very slow nuclear process known as neutrinoless double beta decay. This decay, if observed, would be at least ten thousand times more rare than the rarest known nuclear decay and could demonstrate how our universe became dominated by matter rather than antimatter. Supported efforts also include AI-driven detector design for the Electron-Ion Collider accelerator project under construction at Brookhaven National Laboratory that will probe the internal structure and forces of protons and neutrons that compose the atomic nucleus.

The accelerated growth of ecommerce and online marketplaces has led to a surge in fraudulent behavior online, perpetrated by bots and bad actors alike. A strategic and effective approach to online fraud detection is needed to tackle increasingly sophisticated threats to online retailers.

These market shifts come at a time of significant regulatory change. Across the globe, new legislation is coming into force that alters the balance of responsibility in fraud prevention between users, brands, and the platforms that promote them digitally. For example, the EU Digital Services Act and US Shop Safe Act will require online platforms to take greater responsibility for the content on their websites, a responsibility that was traditionally the domain of brands and users to monitor and report.

Can AI find what’s hiding in your data? In the search for security vulnerabilities, behavioral analytics software provider Pasabi has seen a sharp rise in interest in its AI analytics platform for online fraud detection, with a number of key wins including the online reviews platform Trustpilot. Pasabi maintains its AI models based on anonymised datasets collected from multiple sources.

Using bespoke models and algorithms, as well as some open-source and commercial technology such as TensorFlow and Neo4j, Pasabi’s platform has proved effective at detecting patterns in both text and visual data. Customers provide their data to Pasabi for analysis to identify a range of illegal activities (illegal content, scams, and counterfeits, for example), upon which the customer can then act.
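The mention of a graph database like Neo4j hints at one common fraud-detection pattern: accounts that share identifiers (a payment card, a device) form a graph, and unusually large connected clusters suggest a coordinated ring. The sketch below is hypothetical and not Pasabi's actual pipeline; it uses a plain in-memory graph and breadth-first search to illustrate the idea.

```python
from collections import defaultdict

# Hypothetical account data: each account lists identifiers it has used.
accounts = {
    "a1": {"card_9001", "dev_7"},
    "a2": {"card_9001", "dev_8"},
    "a3": {"dev_8"},
    "a4": {"card_5555"},          # ordinary, isolated account
    "a5": {"dev_7", "card_3131"},
}

# Link accounts that share any identifier.
by_identifier = defaultdict(set)
for acct, idents in accounts.items():
    for ident in idents:
        by_identifier[ident].add(acct)

graph = defaultdict(set)
for linked in by_identifier.values():
    for a in linked:
        graph[a] |= linked - {a}

def cluster(start):
    """Return the connected component containing `start` (BFS)."""
    seen, queue = {start}, [start]
    while queue:
        for nxt in graph[queue.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Flag components of three or more linked accounts for human review.
flagged = {frozenset(cluster(a)) for a in accounts if len(cluster(a)) >= 3}
print(flagged)  # the a1/a2/a3/a5 ring; a4 stays unflagged
```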


Strategy accelerates the best algorithmic solvers for large sets of cities.

Waiting for a holiday package to be delivered? There’s a tricky math problem that needs to be solved before the delivery truck pulls up to your door, and MIT researchers have a strategy that could speed up the solution.

The approach applies to vehicle routing problems such as last-mile delivery, where the goal is to deliver goods from a central depot to multiple cities while keeping travel costs down. While there are algorithms designed to solve this problem for a few hundred cities, these solutions become too slow when applied to a larger set of cities.

The solver algorithms work by breaking up the problem of delivery into smaller subproblems to solve — say, 200 subproblems for routing vehicles between 2,000 cities. MIT's Cathy Wu and her colleagues augment this process with a new machine-learning algorithm that identifies the most useful subproblems to solve, instead of solving all of them, increasing the quality of the solution while using orders of magnitude less compute.
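The decompose-then-select idea can be sketched in a few lines. This is not the MIT method itself: the "learned" selector is replaced here by a hand-made proxy score (current subroute length), and a basic 2-opt pass stands in for the subproblem solver. What the sketch shows is the shape of the speed-up, refining only the highest-scoring subproblems instead of all of them.

```python
import numpy as np

rng = np.random.default_rng(2)

def route_length(pts, order):
    """Total length of the open path visiting pts in the given order."""
    p = pts[order]
    return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

def two_opt(pts, order):
    """Cheap local improvement: reverse segments while that shortens the route."""
    order = order.copy()
    improved = True
    while improved:
        improved = False
        for i in range(1, len(order) - 2):
            for j in range(i + 1, len(order) - 1):
                new = np.concatenate([order[:i], order[i:j + 1][::-1], order[j + 1:]])
                if route_length(pts, new) < route_length(pts, order) - 1e-9:
                    order, improved = new, True
    return order

# 200 cities split into 10 geographic subproblems (a simple stand-in for
# the solver's decomposition step).
pts = rng.random((200, 2))
subproblems = np.array_split(np.argsort(pts[:, 0]), 10)

# Rank subproblems and refine only the most promising three. The MIT work
# trains a model to do this ranking; a hand-made proxy stands in here.
scores = [route_length(pts, np.asarray(s)) for s in subproblems]
top = np.argsort(scores)[-3:]

before = sum(scores)
after = before
for k in top:
    sub = pts[np.asarray(subproblems[k])]
    init = np.arange(len(sub))
    refined = two_opt(sub, init)
    after += route_length(sub, refined) - route_length(sub, init)
print(f"total subroute length: {before:.2f} -> {after:.2f}")
```

Solving 3 of 10 subproblems costs roughly a third of the compute of solving all of them; the learned selector in the actual paper is what makes that trade favorable at scale.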

Their approach, which they call “learning-to-delegate,” can be used across a variety of solvers and a variety of similar problems, including scheduling and pathfinding for warehouse robots, the researchers say.


While they wrestle with the immediate danger posed by hackers today, US government officials are preparing for another, longer-term threat: attackers who are collecting sensitive, encrypted data now in the hope that they’ll be able to unlock it at some point in the future.

The threat comes from quantum computers, which work very differently from the classical computers we use today. Instead of the traditional bits made of 1s and 0s, they use quantum bits that can represent different values at the same time. The complexity of quantum computers could make them much faster at certain tasks, allowing them to solve problems that remain practically impossible for modern machines—including breaking many of the encryption algorithms currently used to protect sensitive data such as personal, trade, and state secrets.

While quantum computers are still in their infancy, incredibly expensive and fraught with problems, officials say efforts to protect the country from this long-term danger need to begin right now.
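The reason "harvest now, decrypt later" works is that widely used public-key schemes such as RSA rest on the difficulty of factoring the public modulus, exactly the problem a large quantum computer running Shor's algorithm would solve efficiently. The toy below (deliberately tiny primes, not real cryptography) shows that whoever can factor n can rebuild the private key and read old ciphertexts.

```python
# Toy RSA round-trip with tiny primes; real keys use ~2048-bit moduli.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent; computing it needs p and q

message = 42
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(ciphertext, recovered)   # decryption round-trips only with d

# An attacker who factors n rebuilds d and reads the stored ciphertext.
# Trial division works here only because n is tiny; Shor's algorithm is
# what would make this step feasible for real key sizes.
for cand in range(2, n):
    if n % cand == 0:
        p2, q2 = cand, n // cand
        break
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))
print(pow(ciphertext, d2, n))  # the message again: factoring breaks the key
```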

Dante’s Divine Comedy has inspired countless artists, from William Blake to Franz Liszt, and from Auguste Rodin to CS Lewis. But an exhibition marking the 700th anniversary of the Italian poet’s death will be showcasing the work of a rather more modern devotee: Ai-Da the robot, which will make history by becoming the first robot to publicly perform poetry written by its AI algorithms.

The ultra-realistic Ai-Da, who was devised in Oxford by Aidan Meller and named after computing pioneer Ada Lovelace, was given the whole of Dante’s epic three-part narrative poem, the Divine Comedy, to read, in JG Nichols’ English translation. She then used her algorithms, drawing on her data bank of words and speech pattern analysis, to produce her own reactive work to Dante’s.


Black holes are one of the greatest mysteries of the universe—for example, a black hole with the mass of our sun has a radius of only 3 kilometers. Black holes in orbit around each other emit gravitational radiation—oscillations of space and time predicted by Albert Einstein in 1916. This causes the orbit to become faster and tighter, and eventually, the black holes merge in a final burst of radiation. These gravitational waves propagate through the universe at the speed of light, and are detected by observatories in the U.S. (LIGO) and Italy (Virgo). Scientists compare the data collected by the observatories against theoretical predictions to estimate the properties of the source, including how large the black holes are and how fast they are spinning. Currently, this procedure takes at least hours, often months.

An interdisciplinary team of researchers from the Max Planck Institute for Intelligent Systems (MPI-IS) in Tübingen and the Max Planck Institute for Gravitational Physics (Albert Einstein Institute/AEI) in Potsdam is using state-of-the-art machine learning methods to speed up this process. They developed an algorithm using a neural network, a complex computer code built from a sequence of simpler operations, inspired by the human brain. Within seconds, the system infers all properties of the binary black-hole source. Their research results are published today in Physical Review Letters.

“Our method can make very accurate statements in a few seconds about how big and massive the two black holes that generated the gravitational waves were when they merged. How fast do the black holes rotate, how far away are they from Earth and from which direction is the gravitational wave coming? We can deduce all this from the observed data and even make statements about the accuracy of this calculation,” explains Maximilian Dax, first author of the study “Real-Time Gravitational Wave Science with Neural Posterior Estimation” and Ph.D. student in the Empirical Inference Department at MPI-IS.
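The key idea behind this kind of speed-up is amortized inference: spend the expensive compute on simulations before any detection, then answer a real event almost instantly. The real pipeline trains deep normalizing flows on relativistic waveform models; the sketch below replaces all of that with a toy "chirp" signal and a nearest-match lookup in a precomputed simulation bank, purely to show where the seconds-instead-of-months saving comes from.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 256)

def simulate(freq, amp):
    """Toy chirp-like signal with additive detector noise (not a real
    gravitational waveform model)."""
    return amp * np.sin(2 * np.pi * freq * t * t) + rng.normal(0, 0.05, t.size)

# Offline (slow, done once): simulate many sources with known parameters.
params = rng.uniform([5.0, 0.5], [15.0, 2.0], size=(2000, 2))  # freq, amp
bank = np.array([simulate(f, a) for f, a in params])

# Online (fast): match a new observation against the bank and read off the
# parameters of the closest simulation -- a crude stand-in for the trained
# network, but all the expensive work happened before the signal arrived.
obs = simulate(10.0, 1.2)                      # "detected" signal
best = np.argmin(((bank - obs) ** 2).sum(axis=1))
est_freq, est_amp = params[best]
print(f"estimated freq={est_freq:.2f}, amp={est_amp:.2f}")
```

The trained network in the actual paper goes further than a lookup: it outputs a full posterior distribution over all source parameters, including the uncertainty estimates Dax mentions.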

Alphabet’s AI research company DeepMind has released the next generation of its language model, and it says that it has close to the reading comprehension of a high schooler — a startling claim.

It says the language model, called Gopher, was able to significantly improve its reading comprehension by ingesting massive repositories of texts online.

DeepMind boasts that its algorithm, an “ultra-large language model,” has 280 billion parameters, which are a measure of size and complexity. That means it falls somewhere between OpenAI’s GPT-3 (175 billion parameters) and Microsoft and NVIDIA’s Megatron, which features 530 billion parameters, The Verge points out.
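Parameter counts like these can be roughly reproduced from a model's shape. For a decoder-only transformer, each layer contributes about 12·d² weights (4·d² in attention plus 8·d² in the feed-forward block), plus an embedding matrix. The configurations below are illustrative choices that land near the published sizes; the real models' exact shapes and counts differ somewhat.

```python
def transformer_params(layers, d_model, vocab=50000):
    """Rough decoder-only transformer parameter count: ~12*d^2 weights
    per layer plus a vocab x d_model embedding matrix. Ignores biases,
    layer norms, and positional parameters, so it slightly undercounts."""
    return layers * 12 * d_model ** 2 + vocab * d_model

# Hypothetical shapes chosen to land near the published totals.
print(f"GPT-3-scale (96 layers, d=12288):  {transformer_params(96, 12288) / 1e9:.0f}B")
print(f"Gopher-scale (80 layers, d=16384): {transformer_params(80, 16384) / 1e9:.0f}B")
```

The first configuration comes out near 175 billion, matching GPT-3's published size; the second lands in the mid-200-billions, in the neighborhood of (though below) Gopher's reported 280 billion, which reflects the terms the approximation drops.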