
Education Saturday with Space Time.


It’s not surprising that the profound weirdness of the quantum world has inspired some outlandish explanations – nor that these have strayed into the realm of what we might call mysticism. One particularly pervasive notion is the idea that consciousness can directly influence quantum systems – and so influence reality. Today we’re going to see where this idea comes from, and whether quantum theory really supports it.

The behavior of the quantum world is beyond weird. Objects being in multiple places at once, communicating faster than light, or simultaneously experiencing multiple entire timelines … that then talk to each other. The rules governing the tiny quantum world of atoms and photons seem alien. And yet we have a set of rules that give us incredible power in predicting the behavior of a quantum system – rules encapsulated in the mathematics of quantum mechanics. Despite its stunning success, we’re now nearly a century past the foundation of quantum mechanics and physicists are still debating how to interpret its equations and the weirdness they represent.

Researchers at the University of Massachusetts and the Air Force Research Laboratory Information Directorate have recently created a 3D computing circuit that could be used to map and implement complex machine learning algorithms, such as convolutional neural networks (CNNs). This 3D circuit, presented in a paper published in Nature Electronics, comprises eight layers of memristors: electrical components that regulate the current flowing in a circuit and directly implement neural network weights in hardware.

“Previously, we developed a very reliable memristive device that meets most requirements of in-memory computing for artificial neural networks, integrated the devices into large 2-D arrays and demonstrated a wide variety of machine intelligence applications,” Prof. Qiangfei Xia, one of the researchers who carried out the study, told TechXplore. “In our recent study, we decided to extend it to the third dimension, exploring the benefit of a rich connectivity in a 3D neural network.”

Essentially, Prof. Xia and his team were able to experimentally demonstrate a 3D computing circuit with eight memristor layers, all of which can be engaged in computing. Their circuit differs greatly from previously developed 3D systems, such as 3D NAND flash, which are usually composed of layers with different functions (e.g., a sensor layer, a computing layer, a control layer) stacked or bonded together.
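To make the mapping concrete, the sketch below shows how a small 2D convolution can be unrolled into the matrix-vector product that a memristor crossbar computes in a single analog step. It is an illustrative NumPy toy with made-up values, not a description of the authors' 3D circuit.

```python
import numpy as np

# Illustrative sketch only: a crossbar computes y = W @ x in one step, with
# weights stored as device conductances and inputs applied as voltages. A CNN
# layer is typically mapped to such hardware by unrolling each image patch
# into a vector ("im2col") so the convolution becomes a matrix-vector product.

def im2col(image, k):
    """Flatten every k x k patch of `image` into one column of the output."""
    h, w = image.shape
    cols = []
    for i in range(h - k + 1):
        for j in range(w - k + 1):
            cols.append(image[i:i + k, j:j + k].ravel())
    return np.array(cols).T            # shape (k*k, num_patches)

image = np.random.rand(6, 6)
kernel = np.random.rand(3, 3)

patches = im2col(image, 3)             # inputs: one "voltage vector" per patch
weights = kernel.ravel()[None, :]      # conductance row programmed into the array
output = weights @ patches             # the crossbar's multiply-accumulate
print(output.reshape(4, 4))            # resulting feature map of the convolution
```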

The news: In a fresh spin on manufactured pop, OpenAI has released a neural network called Jukebox that can generate catchy songs in a variety of different styles, from teenybop and country to hip-hop and heavy metal. It even sings—sort of.

How it works: Give it a genre, an artist, and lyrics, and Jukebox will produce a passable pastiche in the style of well-known performers, such as Katy Perry, Elvis Presley or Nas. You can also give it the first few seconds of a song and it will autocomplete the rest.

Rice University researchers have discovered a hidden symmetry in the chemical kinetic equations scientists have long used to model and study many of the chemical processes essential for life.

The find has implications for drug design, genetics and biomedical research and is described in a study published this month in the Proceedings of the National Academy of Sciences. To illustrate the biological ramifications, study co-authors Oleg Igoshin, Anatoly Kolomeisky and Joel Mallory of Rice’s Center for Theoretical Biological Physics (CTBP) used three wide-ranging examples: protein folding, enzyme catalysis and motor protein efficiency.

In each case, the researchers demonstrated that a simple mathematical ratio shows that the likelihood of errors is controlled by kinetics rather than thermodynamics.
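As a rough illustration of that distinction (and not the specific ratio derived in the paper), the toy calculation below compares an error fraction fixed by equilibrium thermodynamics (a Boltzmann factor) with one fixed by competing rate constants; the energy and rate values are arbitrary.

```python
import numpy as np

# Toy comparison: how an error fraction can be set by kinetics rather than
# thermodynamics. Energies are in units of k_B*T; all numbers are made up.
dG_wrong = 2.0        # "wrong" product is 2 k_B*T less stable than "right"

# Thermodynamic control: at equilibrium the error is fixed by the Boltzmann factor.
error_thermo = np.exp(-dG_wrong) / (1.0 + np.exp(-dG_wrong))

# Kinetic control: if the products form irreversibly, the error is fixed by the
# ratio of the competing forward rate constants, independent of dG_wrong.
k_right, k_wrong = 100.0, 1.0
error_kinetic = k_wrong / (k_right + k_wrong)

print(f"thermodynamic error ~ {error_thermo:.3f}")   # ~0.12
print(f"kinetic error       ~ {error_kinetic:.3f}")  # ~0.01
```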

The Newtonian laws of physics explain the behavior of objects in the everyday physical world, such as an apple falling from a tree. For hundreds of years Newton provided a complete answer, until Einstein's work introduced the concept of relativity. The discovery of relativity did not suddenly prove Newton wrong; relativistic corrections become significant only at speeds above about 67 million mph, roughly a tenth of the speed of light. Instead, improving technology allowed both more detailed observations and new techniques for analysis that then required explanation. While most of the consequences of a Newtonian model are intuitive, much of relativity is not, and it is only approachable through complex equations, modeling, and highly simplified examples.
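Since the "67 million mph" figure may look arbitrary, here is a quick check showing that it corresponds to roughly a tenth of the speed of light, where the Lorentz factor is still only about half a percent above one.

```python
import math

# Sanity check on the 67 million mph threshold quoted above.
c = 299_792_458.0                       # speed of light, m/s
v = 67e6 * 0.44704                      # 67 million mph converted to m/s
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"v/c = {v / c:.3f}, gamma = {gamma:.4f}")   # v/c ~ 0.100, gamma ~ 1.0050
```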

In this issue, Korman et al.1 provide data from a model of the second gas effect on arterial partial pressures of volatile anesthetic agents. Most readers might wonder what this information adds, some will struggle to remember what the second gas effect is, and others will query the value of modeling rather than “real data.” This editorial attempts to address these questions.

The second gas effect2 is a consequence of the concentration effect3 where a “first gas” that is soluble in plasma, such as nitrous oxide, moves rapidly from the lungs to plasma. This increases the alveolar concentration and hence rate of uptake into plasma of the “second gas.” The second gas is typically a volatile anesthetic, but oxygen also behaves as a second gas.4 Although we frequently talk of inhalational kinetics as a single process, there are multiple steps between dialing up a concentration and the consequent change in effect. The key steps are transfer from the breathing circuit to alveolar gas, from the alveoli to plasma, and then from plasma to the “effect-site.” Separating the two steps between breathing circuit and plasma helps us understand both the second gas effect and the message underlying the paper by Korman et al.1
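A back-of-the-envelope calculation makes the concentration and second gas effects concrete. The fractions and uptake figures below are arbitrary, chosen only to show the mechanism, and are not taken from Korman et al.

```python
# Toy illustration of the concentration / second gas effect (numbers are made up).
alveolar_volume = 1.0                      # arbitrary units
f_n2o, f_agent, f_o2 = 0.70, 0.02, 0.28    # initial alveolar fractions

# Suppose rapid uptake removes half of the alveolar N2O in some interval while
# the volatile "second gas" is barely absorbed in the same time.
n2o_taken_up = 0.5 * f_n2o * alveolar_volume
remaining_volume = alveolar_volume - n2o_taken_up

new_f_agent = (f_agent * alveolar_volume) / remaining_volume
print(f"second-gas fraction rises from {f_agent:.2%} to {new_f_agent:.2%}")
# The higher alveolar concentration then drives faster uptake of the second gas
# into plasma, which is the second gas effect.
```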

An exact solution of the Einstein–Maxwell equations yields a general relativistic picture of the tachyonic phenomenon, suggesting a hypothesis on tachyon creation. The hypothesis says that the tachyon is produced when a neutral and very heavy (over 75 GeV/c^2) subatomic particle is placed in electric and magnetic fields that are perpendicular, very strong (over 6.9 x 10^17 esu/cm^2 or oersted), and whose squared ratio of strengths lies in the interval (1,5]. Such conditions can occur when nonpositive subatomic particles of high energy strike atomic nuclei other than the proton. The kinematical relations for the produced tachyon are given. Previous searches for tachyons in air showers and some possible causes of their negative results are discussed.

Can we study AI the same way we study lab rats? Researchers at DeepMind and Harvard University seem to think so. They built an AI-powered virtual rat that can carry out multiple complex tasks. Then, they used neuroscience techniques to understand how its artificial “brain” controls its movements.

Today’s most advanced AI is powered by artificial neural networks: machine learning algorithms made up of layers of interconnected components called “neurons” that are loosely inspired by the structure of the brain. While they operate in very different ways, a growing number of researchers believe drawing parallels between the two could both improve our understanding of neuroscience and help build smarter AI.

Now the authors of a new paper due to be presented this week at the International Conference on Learning Representations have created a biologically accurate 3D model of a rat that can be controlled by a neural network in a simulated environment. They also showed that they could use neuroscience techniques for analyzing biological brain activity to understand how the neural net controlled the rat’s movements.
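One example of such a technique (illustrative only; the paper's specific analyses may differ) is to look for low-dimensional structure in the controller's hidden-unit activity with principal component analysis, much as neuroscientists do with recorded motor-cortex populations. The synthetic activations below stand in for the network's real activity.

```python
import numpy as np

# Illustrative PCA on simulated "recordings" of hidden-unit activity
# (timesteps x units); the latent structure here is planted on purpose.
rng = np.random.default_rng(0)
timesteps, units = 500, 128
latent = rng.normal(size=(timesteps, 3))            # pretend 3 latent movement signals
mixing = rng.normal(size=(3, units))
activations = latent @ mixing + 0.1 * rng.normal(size=(timesteps, units))

centered = activations - activations.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
variance_explained = singular_values**2 / np.sum(singular_values**2)
print("variance explained by first 5 PCs:", np.round(variance_explained[:5], 3))
# A few components capturing most of the variance suggests low-dimensional
# dynamics, the same signature looked for in real neural population recordings.
```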

The main idea of artificial neural networks (ANNs) is to build up representations of complicated functions using compositions of relatively simple functions called layers.

A deep neural network is one that has many layers, or many functions composed together.

Although layers are typically simple functions (e.g., relu(Wx + b)), in general they can be any differentiable function.
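A minimal sketch of that idea, with made-up sizes and random weights: each layer is relu(Wx + b), and the deep network is simply their composition.

```python
import numpy as np

# A deep network as a composition of simple layers, each of the form relu(W @ x + b).
def relu(x):
    return np.maximum(0.0, x)

def layer(W, b):
    return lambda x: relu(W @ x + b)

rng = np.random.default_rng(0)
sizes = [4, 8, 8, 2]                  # input dim 4, two hidden layers, output dim 2
layers = [layer(rng.normal(size=(m, n)), rng.normal(size=m))
          for n, m in zip(sizes[:-1], sizes[1:])]

def network(x):
    for f in layers:                   # function composition: f3(f2(f1(x)))
        x = f(x)
    return x

print(network(rng.normal(size=4)))     # output of the composed (deep) function
```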

Quantitative biologists David McCandlish and Juannan Zhou at Cold Spring Harbor Laboratory have developed an algorithm with predictive power, giving scientists the ability to see how specific genetic mutations can combine to make critical proteins change over the course of a species’ evolution.

Described in Nature Communications, the algorithm, called “minimum epistasis interpolation,” produces a visualization of how a protein could evolve to become either highly effective or not effective at all. The researchers compared the functionality of thousands of versions of the protein, finding patterns in how mutations cause the protein to evolve from one functional form to another.

“Epistasis” describes any interaction between genetic mutations in which the effect of one gene is dependent upon the presence of another. In many cases, scientists assume that when reality does not align with their predictive models, these interactions between genes are at play. With this in mind, McCandlish created this new algorithm with the assumption that every mutation matters. The term “Interpolation” describes the act of predicting the evolutionary path of mutations a species might undergo to achieve optimal protein function.
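A tiny numerical example (not the authors' algorithm, and with made-up fitness values) shows what nonzero epistasis looks like: the effect of mutation A changes depending on whether mutation B is already present.

```python
# Minimal illustration of epistasis with invented fitness values.
fitness = {
    "wildtype": 1.00,
    "A":        1.10,   # A alone helps a little
    "B":        0.95,   # B alone is mildly harmful
    "AB":       1.40,   # together they help far more than expected
}

effect_A_alone   = fitness["A"]  - fitness["wildtype"]
effect_A_given_B = fitness["AB"] - fitness["B"]
epistasis = effect_A_given_B - effect_A_alone
print(f"epistasis = {epistasis:+.2f}")   # nonzero, so the mutations interact
```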

Robots could soon assist humans in a variety of fields, including in manufacturing and industrial settings. A robotic system that can automatically assemble customized products may be particularly desirable for manufacturers, as it could significantly decrease the time and effort necessary to produce a variety of products.

To work most effectively, such a robot should integrate an assembly planner, a component that plans the sequence of movements and actions that a robot should perform to manufacture a specific product. Developing an assembly planner that can rapidly plan the sequences of movements necessary to produce different customized products, however, has so far proved to be highly challenging.

Researchers at the German Aerospace Center (DLR) have recently developed an algorithm that can transfer knowledge acquired by a robot while assembling products in the past to the assembly of new items. This algorithm, presented in a paper published in IEEE Robotics and Automation Letters, can ultimately reduce the amount of time required by an assembly planner to come up with action sequences for the manufacturing of new customized products.
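The paper's method is not reproduced here, but the toy sketch below illustrates the general idea of transferring past planning work: cache the assembly subsequences found for earlier products and reuse any that also apply to a new one, so the planner only has to search over the novel parts. The part names and feasibility check are hypothetical.

```python
# Toy sketch of plan reuse (not DLR's actual algorithm).
from itertools import permutations

plan_cache = {}   # maps a frozenset of parts -> a known-good assembly order

def plan_assembly(parts, feasible):
    """Return an assembly order for `parts`, reusing cached sub-plans when possible."""
    key = frozenset(parts)
    if key in plan_cache:
        return plan_cache[key]
    # Start from the largest cached subset of these parts, if any.
    best = max((k for k in plan_cache if k <= key), key=len, default=frozenset())
    prefix = plan_cache.get(best, [])
    remaining = [p for p in parts if p not in best]
    for order in permutations(remaining):        # brute-force only the new parts
        candidate = prefix + list(order)
        if feasible(candidate):
            plan_cache[key] = candidate
            return candidate
    return None

# Example: any order counts as feasible here; a real checker would test
# reachability, collisions, fixturing, and so on.
feasible = lambda seq: True
print(plan_assembly(["base", "motor"], feasible))            # planned from scratch
print(plan_assembly(["base", "motor", "cover"], feasible))   # reuses the cached sub-plan
```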