
Not everything about glass is clear. How its atoms are arranged and behave, in particular, is startlingly opaque.

The problem is that glass is an amorphous solid, a class of materials that lies in the mysterious realm between solid and liquid. Glassy materials also include polymers, such as commonly used plastics. While it might appear stable and static, glass's atoms are constantly shuffling in a frustratingly futile search for equilibrium. This shifty behavior has made the physics of glass nearly impossible for researchers to pin down.

Now a multi-institutional team including Northwestern University, North Dakota State University and the National Institute of Standards and Technology (NIST) has designed an algorithm with the goal of giving polymeric glasses a little more clarity. The algorithm makes it possible for researchers to create coarse-grained models to design materials with dynamic properties and predict their continually changing behaviors. Called the "energy renormalization algorithm," it is the first to accurately predict glass's mechanical behavior and could enable the fast discovery of new materials designed with optimal properties.

Read more

Artificial intelligence has been showing us many tricks as an imitator of human-created art, and now a team of researchers has impressed AI watchers with PaintBot, an AI unleashed as a capable mimic of the old masters.

AI can now deliver a Van Gogh–ish, Vermeer–ish, or Turner–ish painting. The team, from the University of Maryland, the ByteDance AI Lab and Adobe Research, trained an algorithm to reproduce the styles of the old masters.

“Through a coarse-to-fine refinement process our agent can paint arbitrarily complex images in the desired style.”

Read more

In a focus section published in the journal Seismological Research Letters, researchers describe how they are using machine learning methods to hone predictions of seismic activity, identify earthquake centers, characterize different types of seismic waves and distinguish seismic activity from other kinds of ground “noise.”

Machine learning refers to a set of algorithms and models that allow computers to identify and extract patterns of information from large data sets. Machine learning methods often discover these patterns from the data themselves, without reference to the real-world, physical mechanisms represented by the data. The methods have been used successfully on problems such as digital image and speech recognition, among other applications.
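As a toy illustration of that idea, patterns learned from the data alone with no physical model, the sketch below builds a nearest-centroid classifier that separates synthetic "event" traces from white noise using a crude spectral feature. Everything here (the signals, the feature, the names) is invented for illustration and is not taken from the focus section:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: an "event" is a low-frequency burst, "noise" is white.
def make_event():
    t = np.linspace(0, 1, 200, endpoint=False)
    return np.sin(2 * np.pi * 3 * t) + 0.3 * rng.standard_normal(200)

def make_noise():
    return rng.standard_normal(200)

def feature(x):
    # Average spectral magnitude near the burst frequency (bins 2-4).
    spec = np.abs(np.fft.rfft(x, axis=-1))
    return spec[..., 2:5].mean(axis=-1)

# "Training": learn one centroid per class from labeled examples.
c_event = feature(np.stack([make_event() for _ in range(50)])).mean()
c_noise = feature(np.stack([make_noise() for _ in range(50)])).mean()

# "Prediction": assign a new trace to the nearer centroid.
def classify(x):
    f = feature(x)
    return "event" if abs(f - c_event) < abs(f - c_noise) else "noise"
```

A real seismic pipeline would use far richer features and models, but the two-phase shape, fit patterns from data and then apply them to new traces, is the same.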

More seismologists are using the methods, driven by “the increasing size of seismic data sets, improvements in computational power, new algorithms and architecture and the availability of easy-to-use open source machine learning frameworks,” write focus section editors Karianne Bergen of Harvard University, Ting Cheng of Los Alamos National Laboratory, and Zefeng Li of Caltech.

Read more

This particular version of Dadabots has been trained on real death metal band Archspire, and Carr and Zukowski have previously trained the neural network on other real bands like Room For A Ghost, Meshuggah, and Krallice. In the past, they’ve released albums made by these algorithms for free on Dadabots’ Bandcamp — but having a 24/7 algorithmic death metal livestream is something new.

Carr and Zukowski published an abstract about their work in 2017, explaining that “most style-specific generative music experiments have explored artists commonly found in harmony textbooks,” meaning mostly classical music, and have largely ignored smaller genres like black metal. In the paper, the duo said the goal was to have the AI “achieve a realistic recreation” of the audio fed into it, but it ultimately gave them something perfectly imperfect. “Solo vocalists become a lush choir of ghostly voices,” they write. “Rock bands become crunchy cubist-jazz, and cross-breeds of multiple recordings become a surrealist chimera of sound.”

Carr and Zukowski tell Motherboard they hope to have some kind of audience interaction with Dadabots in the future. For now, you can listen to it churn out nonstop death metal and comment along with other people watching the livestream on YouTube.

Read more

The spoken word is a powerful tool, but not all of us have the ability to use it, either due to biology or circumstances. In such cases, technology can bridge the gap — and now that gap is looking shorter than ever, with a new algorithm that turns messages meant for your muscles into legible sounds.

Converting the complex mix of information sent from the brain to the orchestra of body parts required to transform a puff of air into meaningful sound is by no means a simple feat.

The lips, tongue, throat, jaw, larynx, and diaphragm all need to work together in near-perfect synchrony, requiring our brain to become a master conductor when it comes to uttering even the simplest of phrases.

Read more

For years, post-traumatic stress disorder (PTSD) has been one of the most challenging disorders to diagnose. Traditional methods, like one-on-one clinical interviews, can be inaccurate due to the clinician's subjectivity or because a patient holds back symptoms.

Now, researchers at New York University say they’ve taken the guesswork out of diagnosing PTSD in veterans by using artificial intelligence to objectively detect PTSD by listening to the sound of someone’s voice. Their research, conducted alongside SRI International (the research institute responsible for bringing Siri to iPhones), was published Monday in the journal Depression and Anxiety.

According to The New York Times, SRI and NYU spent five years developing a voice analysis program that understands human speech, but also can detect PTSD signifiers and emotions. As the NYT reports, this is the same process that teaches automated customer service programs how to deal with angry callers: By listening for minor variables and auditory markers that would be imperceptible to the human ear, the researchers say the algorithm can diagnose PTSD with 89% accuracy.
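The published feature set is not described in this excerpt, but the flavor of such acoustic markers can be sketched with two classic low-level speech features, frame energy and zero-crossing rate. The function and the synthetic signal below are invented for illustration and are not the NYU/SRI program:

```python
import numpy as np

def voice_features(signal, rate=16000):
    """Toy acoustic features of the kind voice classifiers draw on
    (illustrative only, not the published feature set)."""
    frame = rate // 100                      # 10 ms frames
    n = len(signal) // frame
    frames = signal[: n * frame].reshape(n, frame)
    rms = np.sqrt((frames ** 2).mean(axis=1))                    # loudness contour
    zcr = (np.diff(np.sign(frames), axis=1) != 0).mean(axis=1)   # noisiness proxy
    return {
        "mean_energy": float(rms.mean()),
        "energy_variability": float(rms.std()),  # a flat voice has low variability
        "mean_zcr": float(zcr.mean()),
    }

# A synthetic "voice": a slowly modulated tone standing in for speech.
t = np.linspace(0, 1, 16000, endpoint=False)
sig = np.sin(2 * np.pi * 120 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))
print(voice_features(sig))
```

A diagnostic model would aggregate many such per-frame statistics into the kind of subtle markers the researchers describe.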

Read more

The multiplication of integers is a problem that has kept mathematicians busy since Antiquity. The “Babylonian” method we learn at school requires us to multiply each digit of the first number by each digit of the second one. But when both numbers have a billion digits each, that means a billion times a billion, or 10¹⁸, operations.
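The digit-by-digit method is easy to sketch, and counting its single-digit multiplications shows exactly where the billion-times-a-billion figure comes from. A minimal illustration (the function name is ours):

```python
def schoolbook_multiply(a: int, b: int) -> tuple[int, int]:
    """Multiply two non-negative integers digit by digit,
    returning (product, number of single-digit multiplications)."""
    xs = [int(d) for d in str(a)][::-1]   # least-significant digit first
    ys = [int(d) for d in str(b)][::-1]
    result = [0] * (len(xs) + len(ys))
    ops = 0
    for i, x in enumerate(xs):
        carry = 0
        for j, y in enumerate(ys):
            ops += 1                      # one digit-times-digit operation
            total = result[i + j] + x * y + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(ys)] += carry
    product = int("".join(map(str, result[::-1])))
    return product, ops

p, ops = schoolbook_multiply(12345, 6789)
assert p == 12345 * 6789
assert ops == 5 * 4   # an n-digit by m-digit product costs n * m operations
```

For two billion-digit numbers the count is 10⁹ × 10⁹ = 10¹⁸ single-digit multiplications, as above.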

At a rate of a billion operations per second, it would take a computer a little over 30 years to finish the job. In 1971, the mathematicians Schönhage and Strassen discovered a quicker way, cutting calculation time down to about 30 seconds on a modern laptop. In their article, they also predicted that another algorithm—yet to be found—could do an even faster job. Joris van der Hoeven, a CNRS researcher from the École Polytechnique Computer Science Laboratory LIX, and David Harvey from the University of New South Wales (Australia) have found that algorithm.
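To see what the successive algorithms buy, compare rough operation counts for n = 10⁹ digits: n² for the schoolbook method, roughly n·log n·log log n for Schönhage and Strassen, and n·log n for Harvey and van der Hoeven. Constant factors are ignored, so this is an order-of-magnitude sketch only:

```python
import math

n = 10 ** 9       # a billion digits
rate = 10 ** 9    # assume a billion operations per second

schoolbook = n * n                                     # ~10**18 ops
sch_str = n * math.log2(n) * math.log2(math.log2(n))   # Schonhage-Strassen bound
harvey_vdh = n * math.log2(n)                          # the new O(n log n) bound

for name, ops in [("schoolbook", schoolbook),
                  ("Schonhage-Strassen", sch_str),
                  ("Harvey-van der Hoeven", harvey_vdh)]:
    print(f"{name}: ~{ops:.2e} ops, ~{ops / rate:.2e} s")
```

At a billion operations per second, 10¹⁸ operations take 10⁹ seconds, about 31.7 years, which is the "30 years" figure quoted above.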

They present their work in a new article available through the online HAL archive. But one problem raised by Schönhage and Strassen remains to be solved: proving that no quicker method exists. This poses a new challenge for theoretical computer science.

Read more

Qualcomm said it plans to begin testing its new Cloud AI 100 chip with partners such as Microsoft Corp later this year, with mass production likely to begin in 2020.

Qualcomm’s new chip is designed for what artificial intelligence researchers call “inference” – the process of using an AI algorithm that has been “trained” with massive amounts of data in order to, for example, translate audio into text-based requests.
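Training and inference are distinct phases: training fits a model's parameters from data, while inference simply applies the frozen parameters to new inputs, which is the workload such a chip accelerates. A minimal sketch, with weights and inputs invented for illustration:

```python
import numpy as np

# Pretend these parameters came out of an earlier training run on a big dataset.
TRAINED_W = np.array([0.8, -0.4, 0.2])
TRAINED_B = -0.1

def infer(features: np.ndarray) -> float:
    """Inference: a single fixed-weight forward pass, no learning involved."""
    score = features @ TRAINED_W + TRAINED_B
    return 1.0 / (1.0 + np.exp(-score))   # logistic output in [0, 1]

print(infer(np.array([1.0, 2.0, 0.5])))   # score is 0.0, so this prints 0.5
```

Because the weights never change at this stage, inference hardware can be optimized purely for fast, repeated forward passes rather than for the heavier arithmetic of training.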

Analysts believe chips for speeding up inference will be the largest part of the AI chip market.

Read more