
Rarely does scientific software spark such sensational headlines. “One of biology’s biggest mysteries ‘largely solved’ by AI”, declared the BBC. Forbes called it “the most important achievement in AI — ever”. The buzz over the November 2020 debut of AlphaFold2, Google DeepMind’s artificial-intelligence (AI) system for predicting the 3D structure of proteins, has only intensified since the tool was made freely available in July.

The excitement relates to the software’s potential to solve one of biology’s thorniest problems — predicting the functional, folded structure of a protein molecule from its linear amino-acid sequence, right down to the position of each atom in 3D space. The underlying physicochemical rules for how proteins form their 3D structures remain too complicated for humans to parse, so this ‘protein-folding problem’ has remained unsolved for decades.

Researchers have worked out the structures of around 160,000 proteins from all kingdoms of life, using experimental techniques such as X-ray crystallography and cryo-electron microscopy (cryo-EM) and depositing their 3D information in the Protein Data Bank. Computational biologists have made steady gains in developing software that complements these methods, and have correctly predicted the 3D shapes of some molecules from well-studied protein families.
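For readers who want to poke at those deposited structures directly, here is a minimal sketch of downloading one entry from the Protein Data Bank's RCSB file server and inspecting it with Biopython. It assumes the requests and biopython packages are installed, and the small entry 1CRN (crambin) is used purely as an arbitrary example.

```python
# Minimal sketch: fetch one experimentally determined structure from the
# Protein Data Bank and count its chains and atoms.
# Assumes `requests` and `biopython` are installed; 1CRN is just an example entry.
import requests
from Bio.PDB import PDBParser

pdb_id = "1CRN"
pdb_text = requests.get(f"https://files.rcsb.org/download/{pdb_id}.pdb", timeout=30).text
with open(f"{pdb_id}.pdb", "w") as f:
    f.write(pdb_text)

structure = PDBParser(QUIET=True).get_structure(pdb_id, f"{pdb_id}.pdb")
chains = list(structure.get_chains())
atoms = list(structure.get_atoms())
print(f"{pdb_id}: {len(chains)} chain(s), {len(atoms)} atoms")
print("first atom coordinates (angstroms):", atoms[0].coord)
```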

Soot is one of the world’s worst contributors to climate change. Its impact is similar to that of global methane emissions, and it is second only to carbon dioxide in its destructive potential. This is because soot particles absorb solar radiation, which heats the surrounding atmosphere, resulting in warmer global temperatures. Soot also causes several other environmental and health problems, including making us more susceptible to respiratory viruses.

Soot only persists in the atmosphere for a few weeks, suggesting that if these emissions could be stopped, the air could rapidly clear. This was demonstrated during recent lockdowns, with some major cities reporting clear skies after industrial emissions stopped.

But soot is also part of our future. Soot can be converted into useful carbon black through thermal treatment that removes any harmful components. Carbon blacks are critical ingredients in batteries, tires and paint. If these carbons are made small enough, they can even be made to fluoresce, and they have been used for tagging, in catalysts and even in solar cells.

Rich dynamics in a living neuronal system can be considered a computational resource for physical reservoir computing (PRC). However, PRC that generates a coherent signal output from a spontaneously active neuronal system remains challenging. To overcome this difficulty, we constructed a closed-loop experimental setup for PRC of a living neuronal culture, in which neural activities were recorded with a microelectrode array and stimulated optically using caged compounds. The system was equipped with first-order reduced and controlled error (FORCE) learning to generate a coherent signal output from the living neuronal culture. Our embodiment experiments with a vehicle robot demonstrated that the coherent output served as a homeostasis-like property of the embodied system, from which a maze-solving ability could be generated. Such a homeostatic property generated from the internal feedback loop in a system can play an important role in task solving in biological systems and enable the use of computational resources without any additional learning.
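For intuition about the learning rule, here is a minimal sketch of FORCE training on a simulated random reservoir standing in for the living culture: a recursive-least-squares readout is adapted online so that the closed feedback loop settles into a coherent target signal. The network size, gains and sine-wave target are illustrative assumptions, not values from the study.

```python
# FORCE-style training of a readout on a simulated random reservoir.
# All sizes and constants are illustrative, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)
N, dt, steps = 300, 0.1, 5000
W = 1.5 * rng.normal(0, 1 / np.sqrt(N), (N, N))   # recurrent weights, chaotic regime
w_fb = rng.uniform(-1, 1, N)                      # feedback weights from the readout
w_out = np.zeros(N)                               # readout weights learned by FORCE
P = np.eye(N)                                     # running inverse-correlation estimate

x = 0.5 * rng.normal(0, 1, N)                     # reservoir state
z = 0.0                                           # readout value fed back into the reservoir
target = lambda t: np.sin(0.3 * t)                # the coherent output the loop should produce

for step in range(steps):
    t = step * dt
    r = np.tanh(x)                                # firing rates
    z = w_out @ r                                 # current readout
    x += dt * (-x + W @ r + w_fb * z)             # leaky reservoir dynamics with output feedback
    # FORCE / recursive-least-squares update of the readout weights
    k = P @ r
    c = 1.0 / (1.0 + r @ k)
    P -= c * np.outer(k, k)
    w_out -= c * (z - target(t)) * k

print(f"readout vs. target after training: {z:.3f} vs. {target((steps - 1) * dt):.3f}")
```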

https://www.youtube.com/watch?v=NM7hdDZN2YI

We explore Artificial Intelligence (AI) through Neuromorphic Computing, with computer chips that emulate the biological neurons and synapses in the brain. Neuro-biological chip architectures enable machines to solve very different kinds of problems than traditional computers can, the kinds of problems we previously thought only humans could tackle.

Our guest today is Kelsey Scharnhorst. Kelsey is an Artificial Neural Network Researcher at UCLA. Her research lab (Gimzewski Lab under James Gimzewski) is focused on creating neuromorphic computer chips and further developing their capabilities.

We’ll talk with Kelsey about how neuromorphic computing is different, how neuro-biological computer architecture works, and how it will be used in the future.
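As a rough illustration of what “emulating biological neurons” means at the lowest level (this is not material from the episode), here is a toy leaky integrate-and-fire neuron, the kind of simplified spiking unit that neuromorphic chips implement directly in hardware rather than simulating with conventional instructions. All constants are arbitrary.

```python
# Toy leaky integrate-and-fire neuron driven by a constant input current.
import numpy as np

dt, steps = 1.0, 200                          # time step (ms) and simulation length
tau, v_rest, v_thresh, v_reset = 20.0, -65.0, -50.0, -65.0   # membrane constants (ms, mV)
v = v_rest                                    # membrane potential
input_current = 1.0 * np.ones(steps)          # constant drive (arbitrary units)
spike_times = []

for t in range(steps):
    # leaky integration toward the resting potential, plus the input drive
    v += dt / tau * (v_rest - v) + input_current[t]
    if v >= v_thresh:                         # threshold crossing: emit a spike, then reset
        spike_times.append(t * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {steps * dt:.0f} ms")
```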

Podcast version at: https://is.gd/MM_on_iTunes.

Gimzewski Lab (UCLA Neuromorphic Lab): http://gim.chem.ucla.edu.
Kelsey on LinkedIn: https://www.linkedin.com/in/kelseyscharnhorst.
__________

If you own any piece of jewelry with a ruby, you’re probably never going to look at it the same way again.

Forget those perfect gemstones you see glittering in store displays. What scientists are looking for are the flawed ones — the ones that contain inclusions which can whisper the secrets of Earth’s distant past, like that tardigrade trapped in amber. When researcher Chris Yakymchuk and his team unearthed a peculiar ruby in Greenland, the inclusion they found was what remained of life that was over 2.5 billion years old.

What was inside the ruby sounds common enough: graphite, the same material pencils write with. But this pure form of carbon was, Yakymchuk determined, all that was left of prehistoric microbes, possibly the same cyanobacteria (blue-green algae) that first released oxygen into Earth’s atmosphere through photosynthesis. He led a study recently published in Ore Geology Reviews.

Uncovering the mechanisms of learning via synaptic plasticity is a critical step towards understanding how our brains function and building truly intelligent, adaptive machines. Researchers from the University of Bern propose a new approach in which algorithms mimic biological evolution and learn efficiently through creative evolution.

Our brains are incredibly adaptive. Every day, we form new memories, acquire new knowledge, or refine existing skills. This stands in marked contrast to our current computers, which typically only perform pre-programmed actions. At the core of our adaptability lies synaptic plasticity. Synapses are the connection points between neurons, which can change in different ways depending on how they are used. This synaptic plasticity is an important research topic in neuroscience, as it is central to learning processes and memory. To better understand these processes and build adaptive machines, researchers in the fields of neuroscience and artificial intelligence (AI) are creating models of the mechanisms underlying them. Such models of learning and plasticity help us understand biological information processing and should also enable machines to learn faster.
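To make the idea of “evolving” learning concrete, here is a toy sketch (not the Bern group's actual method) in which a simple truncation-selection evolution strategy searches over the four coefficients of a generic Hebbian-style plasticity rule, scoring each candidate rule by how well a tiny one-layer network trained with it reproduces a teacher mapping. The task, network size and evolutionary settings are all illustrative assumptions.

```python
# Evolving the coefficients (a, b, c, d) of a local plasticity rule
# dw = a*pre*post + b*pre + c*post + d, with the postsynaptic side clamped
# to a teacher signal during training. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_out, trials, lr = 8, 3, 200, 0.01
W_target = rng.normal(0, 1, (n_out, n_in))        # toy "teacher" mapping to be learned

def evaluate(rule):
    """Train a blank one-layer network with the candidate rule; return -error."""
    a, b, c, d = rule
    W = np.zeros((n_out, n_in))
    err = 0.0
    for _ in range(trials):
        pre = rng.normal(0, 1, n_in)              # presynaptic activity
        post_free = W @ pre                       # the network's own response
        teacher = W_target @ pre                  # postsynaptic activity clamped by the teacher
        err += np.mean((post_free - teacher) ** 2)
        # apply the candidate rule to every synapse (post is the clamped teacher activity)
        W += lr * (a * np.outer(teacher, pre) + b * pre + c * teacher[:, None] + d)
    return -err

pop = rng.normal(0, 1, (20, 4))                   # population of candidate rules (a, b, c, d)
for gen in range(50):
    fitness = np.array([evaluate(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[-5:]]       # keep the five best rules
    children = parents[rng.integers(0, 5, 15)] + 0.1 * rng.normal(0, 1, (15, 4))
    pop = np.vstack([parents, children])          # next generation: elites plus mutated copies

best = max(pop, key=evaluate)
print("best evolved rule (a, b, c, d):", np.round(best, 3))
```

The fitness here is noisy because each evaluation draws fresh random inputs; keeping the elite rules in the population is a simple way to tolerate that noise.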

Princeton researchers have invented bubble casting, a new way to make soft robots using “fancy balloons” that change shape in predictable ways when inflated with air.

The new system involves injecting bubbles into a liquid polymer, letting the material solidify and inflating the resulting device to make it bend and move. The researchers used this approach to design and create hands that grip, a fishtail that flaps and slinky-like coils that retrieve a ball. They hope that their simple and versatile method, published Nov. 10 in the journal Nature, will accelerate the development of new types of soft robots.

Traditional rigid robots have multiple uses, such as in manufacturing cars. “But they will not be able to hold your hands and allow you to move somewhere without breaking your wrist,” said Pierre-Thomas Brun, an assistant professor of chemical and biological engineering and the lead researcher on the study. “They’re not naturally geared to interact with the soft stuff, like humans or tomatoes.”

A new analytical technique is able to provide hitherto unattainable insights into the extremely rapid dynamics of biomolecules. The team of developers, led by Abbas Ourmazd from the University of Wisconsin–Milwaukee and Robin Santra from DESY

Commonly abbreviated as DESY, the Deutsches Elektronen-Synchrotron (English: German Electron Synchrotron) is a national research center in Germany that operates particle accelerators used to investigate the structure of matter. It is a member of the Helmholtz Association and operates at sites in Hamburg and Zeuthen.