
Researchers led by City College of New York physicist Pouyan Ghaemi report the development of a quantum algorithm with the potential to study a class of many-electron quantum systems using quantum computers. Their paper, entitled “Creating and Manipulating a Laughlin-Type ν=1/3 Fractional Quantum Hall State on a Quantum Computer with Linear Depth Circuits,” appears in the December issue of PRX Quantum, a journal of the American Physical Society.

“Quantum physics is the fundamental theory of nature which leads to formation of molecules and the resulting matter around us,” said Ghaemi, assistant professor in CCNY’s Division of Science. “It is already known that when we have a macroscopic number of quantum particles, such as electrons in the metal, which interact with each other, novel phenomena such as superconductivity emerge.”

However, until now, according to Ghaemi, tools to study systems with large numbers of interacting quantum particles and their novel properties have been extremely limited.

A new study outlines ways colleges and universities can update their curricula to prepare the workforce for a new wave of quantum technology jobs. Three researchers, including Rochester Institute of Technology Associate Professor Ben Zwickl, suggested steps that need to be taken in a new paper in Physical Review Physics Education Research after interviewing managers at more than 20 quantum technology companies across the U.S.

The study’s authors from University of Colorado Boulder and RIT set out to better understand the types of entry-level positions that exist in these companies and the educational pathways that might lead into those jobs. They found that while the companies still seek employees with traditional STEM degrees, they want the candidates to have a grasp of fundamental concepts in quantum information science and technology.

“For a lot of those roles, there’s this idea of being ‘quantum aware’ that’s highly desirable,” said Zwickl, a member of RIT’s Future Photon Initiative and Center for Advancing STEM Teaching, Learning and Evaluation. “The companies told us that many positions don’t need to have deep expertise, but students could really benefit from a one- or two-semester introductory sequence that teaches the foundational concepts, some of the hardware implementations, how the algorithms work, what a qubit is, and things like that. Then a graduate can bring in all the strength of a traditional STEM degree but can speak the language that the company is talking about.”

Learned optimizers are algorithms that can be trained to solve optimization problems. Although learned optimizers can outperform baseline optimizers in restricted settings, the ML research community understands remarkably little about their inner workings or why they work as well as they do. In a paper currently under review for ICLR 2021, a Google Brain research team attempts to shed some light on the matter.

The researchers explain that optimization algorithms can be considered the basis of modern machine learning. A popular research area in recent years has focused on learning optimization algorithms by directly parameterizing and training an optimizer on a distribution of tasks.

Research on learned optimizers aims to replace the baseline “hand-designed” optimizers with a parametric optimizer trained on a set of tasks, which can then be applied more generally. In contrast to baseline optimizers that use simple update rules derived from theoretical principles, learned optimizers use flexible, high-dimensional, nonlinear parameterizations.
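The contrast between a hand-designed update rule and a learned, parametric one can be sketched in a few lines. The specific parameterization below (a small linear function of the gradient and a momentum accumulator, with parameters `theta`) is illustrative only; real learned optimizers use far richer nonlinear parameterizations, and `theta` would be meta-trained over a distribution of tasks rather than set by hand.

```python
import numpy as np

def quadratic_loss_grad(w):
    # Gradient of the toy task f(w) = 0.5 * ||w||^2, whose minimum is w = 0.
    return w

def learned_optimizer_step(w, grad, state, theta):
    """One step of a tiny parametric optimizer.

    Instead of a fixed rule like SGD's `w - lr * grad`, the update is a
    learned function of the gradient and an accumulated momentum term.
    `theta` holds the optimizer's own trainable parameters.
    """
    momentum = theta["beta"] * state + grad
    update = theta["a"] * grad + theta["b"] * momentum
    return w - update, momentum

# Hypothetical "meta-trained" parameters; in a real system these would be
# learned by optimizing the optimizer's performance across many tasks.
theta = {"a": 0.3, "b": 0.2, "beta": 0.9}

w = np.array([1.0, -2.0])
state = np.zeros_like(w)
for _ in range(50):
    w, state = learned_optimizer_step(w, quadratic_loss_grad(w), state, theta)

print(float(np.linalg.norm(w)))  # converges toward the optimum at w = 0
```

The key point of the research area is that nothing in `learned_optimizer_step` is derived from theory; its behavior is entirely determined by `theta`, which is itself the object being trained.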

A large team of researchers affiliated with a host of institutions in Italy, the U.K. and Hungary has carried out the most precise measurements yet of deuterium fusing with a proton to form helium-3. In their paper published in the journal Nature, the group describes their effort and how they believe it will contribute to better understanding the events that transpired during the first few minutes after the Big Bang.

Astrophysics theory suggests that the creation of deuterium was one of the first things that happened after the Big Bang. Therefore, it plays an important role in Big Bang nucleosynthesis—the reactions that happened afterward that led to the production of several of the light elements. Theorists have developed equations that show the likely series of events that occurred, but to date, it has been difficult to prove them correct without physical evidence. In this new effort, the researchers working at the Laboratory for Underground Nuclear Astrophysics in Italy have carried out experiments to simulate those first few minutes, hoping to confirm the theories.

The work was conducted deep under the thick rock cover of the Gran Sasso mountain to prevent interference from cosmic rays—it involved firing a beam of protons at a deuterium target—deuterium being a form of hydrogen with one proton and one neutron—and then measuring the rate of fusion. But because the rate of fusion is so low, the bombardment had to be carried out many times—the team carried out their work nearly every weekend for three years.
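The need for years of repeated runs follows from simple counting statistics: for a Poisson counting experiment, the relative statistical uncertainty shrinks as one over the square root of the total number of events recorded. The event counts below are made-up illustrative numbers, not figures from the experiment.

```python
import math

# Illustrative only: if each weekend run records on average
# `counts_per_run` fusion events, the relative statistical uncertainty
# after n runs scales as 1 / sqrt(total counts).
counts_per_run = 25          # hypothetical events per weekend run
for runs in (1, 10, 150):    # roughly 150 weekends over three years
    total = counts_per_run * runs
    rel_uncertainty = 1 / math.sqrt(total)
    print(runs, "runs:", round(100 * rel_uncertainty, 1), "% uncertainty")
```

With these assumed numbers, a single run leaves a 20% statistical error, while three years of weekend runs pushes it below 2%, which is why a precision measurement demands such a long campaign.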

Scientists have long sought a system for predicting the properties of materials based on their chemical composition. In particular, they set their sights on the concept of a chemical space that places materials in a reference frame such that neighboring chemical elements and compounds plotted along its axes have similar properties. This idea was first proposed in 1984 by the British physicist David G. Pettifor, who assigned a Mendeleev number (MN) to each element. Yet the meaning and origin of MNs were unclear. Scientists from the Skolkovo Institute of Science and Technology (Skoltech) puzzled out the physical meaning of the mysterious MNs and suggested calculating them based on the fundamental properties of atoms. They showed that both MNs and the chemical space built around them were more effective than the empirical solutions proposed until then. Their research, supported by a grant from the Russian Science Foundation’s (RSF) World-class Lab Research Presidential Program, was presented in The Journal of Physical Chemistry C.

Systematizing the enormous variety of chemical compounds, both known and hypothetical, and pinpointing those with a particularly interesting property is a tall order. Measuring the properties of all imaginable compounds in experiments or calculating them theoretically is downright impossible, which suggests that the search should be narrowed down to a smaller space.

David G. Pettifor put forward the idea of chemical space in the attempt to somehow organize the knowledge about material properties. The chemical space is basically a map where elements are plotted along the axes in a certain sequence such that the neighboring elements, for instance, Na and K, have similar properties. The points within the space represent compounds, so that the neighbors, for example, NaCl and KCl, have similar properties, too. In this setting, one area is occupied by superhard materials and another by ultrasoft ones. Having the space at hand, one could create an algorithm for finding the best material among all possible compounds of all elements. To build their “smart” map, Skoltech scientists Artem R. Oganov and Zahed Allahyari came up with their own universal approach that boasts the highest predictive power as compared to the best-known methods.
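The core idea of deriving a Mendeleev-style ordering from fundamental atomic properties can be sketched as follows. The choice of properties (Pauling electronegativity and atomic radius) and the way they are combined into a single ranking key here are illustrative assumptions, not the authors' exact formula.

```python
# Assumed per-element data: (Pauling electronegativity, atomic radius in Å).
# Values are approximate textbook figures, listed only for illustration.
elements = {
    "Na": (0.93, 1.66),
    "K":  (0.82, 2.03),
    "Cl": (3.16, 1.02),
    "F":  (3.98, 0.57),
    "O":  (3.44, 0.66),
}

def mendeleev_key(props):
    """Project an element onto a single axis from two atomic properties.

    The weights are arbitrary; the point is that chemically similar
    elements (e.g. Na and K) end up adjacent in the resulting sequence.
    """
    electronegativity, radius = props
    return electronegativity - 0.5 * radius

ranked = sorted(elements, key=lambda symbol: mendeleev_key(elements[symbol]))
print(ranked)  # alkali metals cluster at one end, halogens at the other
```

A compound such as NaCl would then be placed at the coordinates of its constituent elements in this sequence, so NaCl and KCl land next to each other, which is exactly the neighborhood property the chemical space is built to provide.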

DARPA recently awarded contracts to five companies to develop algorithms enabling mixed teams of manned and unmanned combat aircraft to conduct aerial dogfighting autonomously.

Boeing, EpiSci, Georgia Tech Research Institute, Heron Systems, and physicsAI were chosen to develop air combat maneuvering algorithms for individual and team tactical behaviors under Technical Area (TA) 1 of DARPA’s Air Combat Evolution (ACE) program. Each team is tasked with developing artificial intelligence agents that expand one-on-one engagements to two-on-one and two-on-two within-visual-range aerial battles. The companies’ algorithms will be tested in each of three program phases: modeling and simulation, sub-scale unmanned aircraft, and full-scale combat representative aircraft scheduled in 2023.

“The TA1 performers include a large defense contractor, a university research institute, and boutique AI firms, who will build upon the first-gen autonomous dogfighting algorithms demonstrated in the AlphaDogfight Trials this past August,” said Air Force Col. Dan “Animal” Javorsek, program manager in DARPA’s Strategic Technology Office. “We will be evaluating how well each performer is able to advance their algorithms to handle individual and team tactical aircraft behaviors, in addition to how well they are able to scale the capability from a local within-visual-range environment to the broader, more complex battlespace.”

DARPA’s SIGMA+ program conducted a week-long deployment of advanced chemical and biological sensing systems in the Indianapolis metro region in August, collecting more than 250 hours of daily-life background atmospheric data across five neighborhoods that helped train algorithms to more accurately detect chemical and biological threats. The testing marked the first time in the program that advanced laboratory-grade instruments for chemical and biological sensing were successfully deployed as mobile sensors, increasing their versatility on the SIGMA+ network.

“Spending a week gathering real-world background data from a major Midwestern metropolitan region was extremely valuable as we further develop our SIGMA+ sensors and networks to provide city and regional-scale coverage for chem and bio threat detection,” said Mark Wrobel, program manager in DARPA’s Defense Sciences Office. “Collecting chemical and biological environment data provided an enhanced understanding of the urban environment and is helping us make refinements of the threat-detection algorithms to minimize false positives and false negatives.”
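The role of background data in balancing false positives against false negatives can be illustrated with a toy threshold calibration. The actual SIGMA+ detection algorithms are not public; this sketch only shows the general idea of setting an alarm threshold from benign background readings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a week of benign background sensor readings (arbitrary
# units); purely synthetic, not SIGMA+ data.
background = rng.normal(loc=10.0, scale=2.0, size=10_000)

# Set the alarm threshold at the 99.9th percentile of background, so by
# construction roughly 0.1% of benign readings would trigger a false alarm.
threshold = np.quantile(background, 0.999)

def alarm(reading):
    # Flag any reading above the background-derived threshold.
    return reading > threshold

false_positive_rate = np.mean([alarm(x) for x in background])
print(float(threshold), float(false_positive_rate))
```

Collecting more and more representative background data lets the threshold be placed tightly: low enough to catch genuine anomalies, high enough that ordinary urban variation does not set off constant alarms.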

SIGMA+ expands on the original SIGMA program’s advanced capability to detect illicit radioactive and nuclear materials by developing new sensors and networks that would alert authorities with high sensitivity to chemical, biological, and explosives threats as well. SIGMA, which began in 2014, has demonstrated city-scale capability for detecting radiological threats and is now operationally deployed with the Port Authority of New York and New Jersey, helping protect the greater New York City region.

A team of researchers at Samsung has developed a slim-panel holographic video display that allows for viewing from a variety of angles. In their paper published in the journal Nature Communications, the group describes their new display device and their plans for making it suitable for use with a smartphone.

Despite predictions in science-fiction books and movies over the past several decades, 3D holographic players are still not available to consumers. Existing players are too bulky and display video from limited viewing angles. In this new effort, the researchers at Samsung claim to have overcome these difficulties and built a demo device to prove it.

To build their demo device, which was approximately 25 cm tall, the team at Samsung added a steering-backlight unit with a beam deflector for increasing viewing angles. The demo had a viewing angle of 15 degrees at distances up to one meter. The beam deflector was made by sandwiching liquid crystals between sheets of glass. The end result was a device that could bend the light that came through it very much like a prism. Testing showed the beam deflector combined with a tilting mechanism increased viewing angles by 30 times compared to conventional designs. The new design also allows for a slim form at just 1 cm thick. It also has a light modulator, geometric lens and a holographic video processor capable of carrying out 140 billion operations per second. The researchers used a new algorithm that uses lookup tables rather than math operations to process the video data. The demo device was capable of displaying 4K resolution holographic video running at 30 frames per second.
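The lookup-table trick mentioned above can be sketched generically: instead of evaluating a transcendental function for every pixel's phase term, the values are precomputed once into a table and fetched by index. The table size and the use of a sine table here are illustrative assumptions, not details of Samsung's processor.

```python
import math

# Precompute one period of sine at TABLE_SIZE evenly spaced phases.
TABLE_SIZE = 4096
SIN_TABLE = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(phase):
    """Approximate sin(phase) by indexing the precomputed table.

    One multiply, one integer conversion, and one array fetch replace a
    transcendental function call; the error is bounded by the table step.
    """
    index = int(phase / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SIN_TABLE[index]

phase = 1.2345
# The table value agrees with math.sin to roughly 3 decimal places.
print(fast_sin(phase), math.sin(phase))
```

Per-pixel savings like this add up quickly at 4K resolution and 30 frames per second, which is the regime where replacing math operations with table lookups pays off.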

Borrowing a page from high-energy physics and astronomy textbooks, a team of physicists and computer scientists at the U.S. Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab) has successfully adapted and applied a common error-reduction technique to the field of quantum computing.

In the world of subatomic particles and giant particle detectors, and distant galaxies and giant telescopes, scientists have learned to live, and to work, with uncertainty. They are often trying to tease out ultra-rare particle interactions from a massive tangle of other particle interactions and background “noise” that can complicate their hunt, or trying to filter out the effects of atmospheric distortions and interstellar dust to improve the resolution of astronomical imaging.

In addition, inherent problems with detectors, such as limits on their ability to record every particle interaction or to measure particle energies exactly, can result in data being misread by the electronics they are connected to. Scientists therefore design complex filters, in the form of computer algorithms, to reduce the margin of error and return the most accurate results.
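One common form such a correction takes in high-energy physics is a detector response matrix: the observed counts are modeled as a known mixing of the true counts, and the correction recovers the true distribution. The sketch below uses a direct matrix solve for clarity; practical analyses typically prefer iterative (e.g. Bayesian) unfolding because direct inversion can amplify statistical noise. The matrix values are made up for illustration.

```python
import numpy as np

# Response matrix: entry [i, j] is the probability that a true outcome j
# is recorded as outcome i (e.g. a measurement read out in the wrong bin).
R = np.array([[0.95, 0.10],
              [0.05, 0.90]])

true_counts = np.array([700.0, 300.0])

# What the imperfect detector actually reports (ignoring statistical noise).
observed = R @ true_counts

# Undo the known response to recover the true counts.
recovered = np.linalg.solve(R, observed)
print(recovered)  # -> [700. 300.]
```

The appeal of adapting this technique to quantum computing is that qubit readout errors can be characterized as just such a response matrix, letting the same correction machinery clean up measurement results.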