
The universe is kind of an impossible object. It has an inside but no outside; it’s a one-sided coin. This Möbius architecture presents a unique challenge for cosmologists, who find themselves in the awkward position of being stuck inside the very system they’re trying to comprehend.

It’s a situation that Lee Smolin has been thinking about for most of his career. A physicist at the Perimeter Institute for Theoretical Physics in Waterloo, Canada, Smolin works at the knotty intersection of quantum mechanics, relativity and cosmology. Don’t let his soft voice and quiet demeanor fool you — he’s known as a rebellious thinker and has always followed his own path. In the 1960s Smolin dropped out of high school, played in a rock band called Ideoplastos, and published an underground newspaper. Wanting to build geodesic domes like R. Buckminster Fuller, Smolin taught himself advanced mathematics — the same kind of math, it turned out, that you need to play with Einstein’s equations of general relativity. The moment he realized this was the moment he became a physicist. He studied at Harvard University and took a position at the Institute for Advanced Study in Princeton, New Jersey, eventually becoming a founding faculty member at the Perimeter Institute.

“Perimeter,” in fact, is the perfect word to describe Smolin’s place near the boundary of mainstream physics. When most physicists dived headfirst into string theory, Smolin played a key role in working out the competing theory of loop quantum gravity. When most physicists said that the laws of physics are immutable, he said they evolve according to a kind of cosmic Darwinism. When most physicists said that time is an illusion, Smolin insisted that it’s real.

A new paper from researchers at the University of Chicago introduces a technique for compiling highly optimized quantum instructions that can be executed on near-term hardware. This technique is particularly well suited to a new class of variational quantum algorithms, which are promising candidates for demonstrating useful quantum speedups. The new work was enabled by uniting ideas across the stack, spanning quantum algorithms, machine learning, compilers, and device physics. The interdisciplinary research was carried out by members of the EPiQC (Enabling Practical-scale Quantum Computation) collaboration, an NSF Expedition in Computing.

Adapting to a New Paradigm for Quantum Algorithms

The original vision for quantum computing dates to the early 1980s, when physicist Richard Feynman proposed performing molecular simulations using just thousands of noiseless qubits (quantum bits), a practically impossible task for traditional computers. Other algorithms developed in the 1990s and 2000s demonstrated that thousands of noiseless qubits would also offer dramatic speedups for problems such as database search, integer factoring, and matrix algebra. However, despite recent advances in quantum hardware, these algorithms are still decades away from scalable realizations, because current hardware features noisy qubits.
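To make the variational paradigm concrete, here is a minimal classical simulation of a one-qubit variational eigensolver, the style of near-term, hybrid quantum-classical algorithm the article refers to. The toy Hamiltonian, ansatz, and parameter sweep are illustrative assumptions of mine, not the compilation technique from the Chicago paper.

```python
import numpy as np

# Minimal classical sketch of a variational quantum eigensolver (VQE):
# a parameterized "quantum" state is tuned by a classical outer loop to
# minimize the energy of a small Hamiltonian.

# Pauli matrices
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# Toy single-qubit Hamiltonian whose ground-state energy we want
H = 0.5 * Z + 0.3 * X

def ansatz_state(theta):
    """One-parameter ansatz: Ry(theta) applied to |0>."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(theta):
    """Expectation value <psi(theta)| H |psi(theta)>."""
    psi = ansatz_state(theta)
    return float(np.real(psi.conj() @ H @ psi))

# Classical outer loop: sweep the parameter and keep the best value,
# standing in for the optimizer a real hybrid run would use.
thetas = np.linspace(0, 2 * np.pi, 400)
energies = [energy(t) for t in thetas]
best = int(np.argmin(energies))

exact = np.linalg.eigvalsh(H).min()
print(f"VQE estimate: {energies[best]:.4f} at theta = {thetas[best]:.3f}")
print(f"Exact ground energy: {exact:.4f}")
```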

Something called the fast Fourier transform is running on your cell phone right now. The FFT, as it is known, is a signal-processing algorithm that you use more than you realize. It is, according to the title of one research paper, “an algorithm the whole family can use.”

Alexander Stoytchev – an associate professor of electrical and computer engineering at Iowa State University who’s also affiliated with the university’s Virtual Reality Applications Center, its Human Computer Interaction graduate program and the department of computer science – says the FFT algorithm and its inverse (known as the IFFT) are at the heart of signal processing.

And, as such, “These are algorithms that made the digital revolution possible,” he said.

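As a concrete illustration of the FFT/IFFT pair Stoytchev describes (my own sketch, not from the article), the snippet below moves a two-tone signal into the frequency domain with NumPy, reads off the dominant frequencies, and then recovers the original samples with the inverse transform.

```python
import numpy as np

fs = 1000                       # sampling rate in Hz
t = np.arange(fs) / fs          # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.fft(signal)              # forward FFT: time -> frequency
freqs = np.fft.fftfreq(signal.size, d=1 / fs)

# The two tones appear as the strongest components (at +/-50 Hz and +/-120 Hz).
peaks = freqs[np.argsort(np.abs(spectrum))[-4:]]
print("Dominant frequencies (Hz):", sorted(abs(f) for f in peaks))

recovered = np.fft.ifft(spectrum).real     # inverse FFT: frequency -> time
print("Round-trip error:", np.max(np.abs(recovered - signal)))
```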

Sensitive synthetic skin enables robots to sense their own bodies and surroundings—a crucial capability if they are to be in close contact with people. Inspired by human skin, a team at the Technical University of Munich (TUM) has developed a system combining artificial skin with control algorithms and used it to create the first autonomous humanoid robot with full-body artificial skin.

The skin developed by Prof. Gordon Cheng and his team consists of hexagonal cells, each about the size of a two-euro coin (i.e. about one inch in diameter). Each cell is equipped with a microprocessor and sensors to detect contact, acceleration, proximity and temperature. Such artificial skin enables robots to perceive their surroundings in much greater detail and with more sensitivity. This not only helps them to move safely; it also makes them safer when operating near people and gives them the ability to anticipate and actively avoid accidents.
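A rough sketch of how a controller might consume such per-cell readings is below. The field names, thresholds, and functions are hypothetical, chosen only to illustrate the idea of reacting to contact and proximity events; they are not TUM's actual interface.

```python
from dataclasses import dataclass

@dataclass
class SkinCellReading:
    """Hypothetical snapshot of one hexagonal skin cell's sensors."""
    cell_id: int
    contact_force: float      # N, from the contact sensor
    acceleration: float       # m/s^2, from the accelerometer
    proximity: float          # normalized 0..1, nearby-object sensor
    temperature: float        # degrees Celsius

def needs_attention(reading: SkinCellReading,
                    force_threshold: float = 0.5,
                    proximity_threshold: float = 0.8) -> bool:
    """Flag a cell whose sensors suggest contact or an imminent collision."""
    return (reading.contact_force > force_threshold
            or reading.proximity > proximity_threshold)

# A controller could poll thousands of cells and react only to the few that
# are flagged, which keeps whole-body sensing tractable.
sample = SkinCellReading(cell_id=42, contact_force=0.7,
                         acceleration=0.1, proximity=0.2, temperature=24.0)
print(needs_attention(sample))  # True: contact force exceeds the threshold
```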

The skin cells themselves were developed around 10 years ago by Gordon Cheng, Professor of Cognitive Systems at TUM. But this invention only revealed its full potential when integrated into a sophisticated system, as described in the latest issue of the journal Proceedings of the IEEE.

https://www.youtube.com/watch?v=b07Pci_-eVY&t=1s

Two University of Hawaiʻi at Mānoa researchers have identified and corrected a subtle error that was made when applying Einstein’s equations to model the growth of the universe.

Physicists usually assume that a cosmologically large system, such as the universe, is insensitive to details of the small systems contained within it. Kevin Croker, a postdoctoral research fellow in the Department of Physics and Astronomy, and Joel Weiner, a faculty member in the Department of Mathematics, have shown that this assumption can fail for the compact objects that remain after the collapse and explosion of very large stars.

“For 80 years, we’ve generally operated under the assumption that the universe, in broad strokes, was not affected by the particular details of any small region,” said Croker. “It is now clear that general relativity can observably connect collapsed stars—regions the size of Honolulu—to the behavior of the universe as a whole, over a thousand billion billion times larger.”
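For orientation (and not as the authors' derivation), the standard Friedmann equations show where such a connection can enter: the expansion history depends on the spatially averaged energy density and pressure of everything the universe contains, so a population of compact objects with unusual pressure can, in principle, feed back into the expansion as a whole.

```latex
% Friedmann equations for a homogeneous, isotropic universe;
% \rho and p are the averaged energy density and pressure of all contents.
\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\,\rho - \frac{k}{a^{2}},
\qquad
\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\,\bigl(\rho + 3p\bigr)
```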

For the “big data” revolution to continue, we need to radically rethink our hard drives. Thanks to evolution, we already have a clue.

Our bodies are jam-packed with data, tightly compacted inside microscopic structures within every cell. Take DNA: with just four letters we’re able to generate every single molecular process that keeps us running. That sort of combinatorial complexity is still unheard of in silicon-based data storage in computer chips.

Add this to the fact that DNA can be dehydrated and kept intact for eons—500,000 years and counting—and it’s no surprise that scientists have been exploiting its properties to encode information. To famed synthetic biologist Dr. George Church, looking to biology is a no-brainer: even the simple bacterium E. coli has a data storage density of about 10¹⁹ bits per cubic centimeter. Translation? A single cube of DNA measuring one meter on each side could meet all of the world’s current data storage needs.
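To see why four letters go so far, here is a toy two-bits-per-base encoder (my own illustrative mapping, not Church's actual scheme), along with the back-of-the-envelope arithmetic behind the cubic-metre claim.

```python
# Each of the four DNA bases can carry 2 bits, so one byte maps to 4 bases.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {bits: base for base, bits in
                 ((b, k) for k, b in BASE_FOR_BITS.items())}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(dna: str) -> bytes:
    bits = "".join(BITS_FOR_BASE[base] for base in dna)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

message = b"big data"
strand = encode(message)
print(strand)                      # 32 bases: 4 per byte
assert decode(strand) == message

# Rough check of the cubic-metre claim: 1e19 bits/cm^3 * 1e6 cm^3/m^3
# = 1e25 bits, i.e. on the order of a million exabytes, comfortably above
# today's global storage demand.
```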

Biomedical engineers at Duke University have devised a machine learning approach to modeling the interactions between complex variables in engineered bacteria that would otherwise be too cumbersome to predict. Their algorithms are generalizable to many kinds of biological systems.

In the new study, the researchers trained a neural network to predict the circular patterns that would be created by a biological circuit embedded into a bacterial culture. The system worked 30,000 times faster than the existing computational model.

To further improve accuracy, the team devised a method for retraining the machine learning model multiple times and comparing the answers. They then used it to solve a second biological system that is computationally demanding in a different way, showing that the algorithm can work for disparate challenges.
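The general recipe, sketched below with an invented stand-in simulator and scikit-learn, is an illustration of the idea rather than the Duke group's code: train a fast neural-network surrogate on a limited number of expensive simulations, then retrain it several times and use the spread of the ensemble's answers as a warning sign for unreliable predictions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def slow_simulator(params):
    """Stand-in for a costly biological-circuit simulation.
    Maps 3 circuit parameters to one pattern feature (e.g. ring radius)."""
    a, b, c = params
    return np.sin(a) * b + 0.1 * c ** 2

# Build a training set by running the "slow" model a limited number of times.
X = rng.uniform(-2, 2, size=(500, 3))
y = np.array([slow_simulator(p) for p in X])

# Retrain the same architecture with different random seeds (a small ensemble).
ensemble = [
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                 random_state=seed).fit(X, y)
    for seed in range(5)
]

query = np.array([[0.5, 1.0, -1.0]])
predictions = np.array([m.predict(query)[0] for m in ensemble])

print("surrogate mean:", predictions.mean())
print("model disagreement:", predictions.std())   # large spread => don't trust it
print("true value:", slow_simulator(query[0]))
```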