
Two and a half months since Erik Verlinde submitted his entropic gravity paper, and all of physics and cosmology has turned into entropy. Well, I am exaggerating a bit, perhaps more than just a bit. Yet the fact is that within two weeks of Erik’s publication, a steady stream of ‘entropic everything’ papers developed at a rate of close to one paper per day. Gravity, Einstein’s equations, cosmic expansion, dark energy, primordial inflation, dark mass: it’s all entropic. Chaos rules. Entropy is king!

Or is it?

Could it be that an ‘entropic bandwagon’ has started rolling? Is this all not just a fad appealing to scientists tired of string theory? What is this elusive entropic force anyway? Do these folks really believe bits of information attract each other?

Microsoft and Warner Bros. have collaborated to successfully store and retrieve the entire 1978 iconic “Superman” movie on a piece of glass roughly the size of a drink coaster: 75 by 75 millimeters, and 2 millimeters thick.

It was the first proof of concept test for Project Silica, a Microsoft Research project that uses recent discoveries in ultrafast laser optics and artificial intelligence to store data in quartz glass. A laser encodes data in glass by creating layers of three-dimensional nanoscale gratings and deformations at various depths and angles. Machine learning algorithms read the data back by decoding images and patterns that are created as polarized light shines through the glass.

The hard silica glass can withstand being boiled in hot water, baked in an oven, microwaved, flooded, scoured, demagnetized and subjected to other environmental threats that can destroy priceless historic archives or cultural treasures if things go wrong.

A team of researchers affiliated with several institutions in France has revisited the idea of improving on estimates of the upper limit of the mass of a graviton. In their paper published in the journal Physical Review Letters, the group describes their accurate measurement of the parameters of planetary bodies and what they found.

Einstein’s theory suggests that the gravity of large masses that warps spacetime comes from a theoretical massless particle called the graviton. Scientists have been trying for many years to either prove the theory correct or disprove it by finding a way to show that the graviton has mass. One approach to such a proof involves studying the speed of the expansion of the universe—this approach has suggested that if the graviton does have a mass, its upper limit would be approximately 10⁻³² electron-volts. Unfortunately, this result is based on a lot of assumptions, many of which are still controversial. Another way to do it is by studying planetary orbital deviations that could only come from a nonzero graviton mass—and starting with the assumption that if a graviton has zero mass, then, like the photon, it should travel at the speed of light. In this new effort, the researchers have found a way to improve the accuracy of this approach.
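For a sense of scale, a graviton mass bound translates into a Compton wavelength, roughly the distance over which a massive graviton would make gravity deviate from the familiar inverse-square behavior. A minimal back-of-envelope calculation, assuming the quoted limits are masses in eV/c² (the papers state their own precise conventions):

```python
# Compton wavelength implied by a graviton mass bound:
# lambda_g = h*c / (m_g * c^2), with the mass-energy in eV.

H_C = 1.23984193e-6  # h*c in eV*m (CODATA value)

def compton_wavelength_m(mass_ev: float) -> float:
    """Compton wavelength in metres for a particle of mass `mass_ev` (eV/c^2)."""
    return H_C / mass_ev

# Cosmological-expansion bound quoted above (~1e-32 eV):
lam_cosmo = compton_wavelength_m(1e-32)
# Planetary-ephemeris bound (~6.76e-23 eV):
lam_planets = compton_wavelength_m(6.76e-23)

print(f"cosmology bound:  {lam_cosmo:.2e} m")
print(f"ephemeris bound:  {lam_planets:.2e} m (~0.6 parsec)")
```

The larger the Compton wavelength, the closer the graviton behaves to a truly massless particle over the distances probed.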

The work involved temporarily freezing the motion of the stars and planets at different points in time—the first was the year 2000. The researchers found the masses, positions and speeds of the sun, the planets and several asteroids for that year. They then ran equations that allowed them to roll forward in time to 2017 and back to 1913 and forward again as needed. These time periods were chosen because the team was able to find usable data for them. In running the calculations, the researchers found that they were able to come up with an estimate for the upper limit of the graviton’s mass of 6.76 × 10⁻²³ electron-volts—at a 90 percent confidence level. The researchers note that their number was very close to that found by a team using data from the LIGO interferometers, but suggest that any similarities were purely coincidental.
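Rolling the system forward and back in time amounts to numerically integrating the equations of motion from a reference epoch. The researchers used a full planetary-ephemeris model; the toy two-body sketch below only illustrates the time-stepping idea, using a leapfrog (velocity-Verlet) scheme, which runs equally well with a negative time step:

```python
import math

GM_SUN = 1.32712440018e20  # m^3/s^2, solar gravitational parameter
AU = 1.495978707e11        # metres
YEAR = 365.25 * 86400.0    # seconds

def accel(x, y):
    """Acceleration from the Sun's gravity at position (x, y)."""
    r3 = (x * x + y * y) ** 1.5
    return -GM_SUN * x / r3, -GM_SUN * y / r3

def propagate(x, y, vx, vy, t_span, dt=3600.0):
    """Leapfrog-integrate from t=0 to t=t_span; negative t_span runs backwards."""
    if t_span < 0:
        dt = -dt
    n = int(abs(t_span / dt))
    ax, ay = accel(x, y)
    for _ in range(n):
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
        x += dt * vx;        y += dt * vy
        ax, ay = accel(x, y)
        vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    return x, y, vx, vy

# An Earth-like circular orbit at 1 au:
v_circ = math.sqrt(GM_SUN / AU)
state = (AU, 0.0, 0.0, v_circ)

# Forward one year, then back one year: leapfrog is time-reversible,
# so we recover the starting state almost exactly.
fwd = propagate(*state, YEAR)
back = propagate(*fwd, -YEAR)
print(back[0] / AU, back[1] / AU)
```

The time-reversibility of the leapfrog scheme is what makes this style of back-and-forth propagation between epochs numerically trustworthy.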

A Portland teen won second place in a national technology contest, taking home $2,500 that he can use to attend science camp next summer.

Rishab Jain, 14, is a freshman at Westview High School. His winning project, which he calls the Pancreas Detective, is an artificial intelligence tool that can help diagnose pancreatic cancer through gene sequencing. The algorithm helps doctors focus during examinations on the organ, which is often obscured because it moves around the abdominal area as patients breathe and as other bodily functions shift the surrounding organs.

Last year, the same project netted $25,000 from 3M when he attended Stoller Middle School. He used that money to fund his nonprofit, Samyak Science Society, which promotes science, technology, engineering and math education for other children, Time Magazine reported.

Researchers at Nanyang Technological University, Singapore (NTU Singapore) have developed a quantum communication chip that is 1,000 times smaller than current quantum setups, but offers the same superior security quantum technology is known for.

Most leading security standards used in secure communication methods—from withdrawing cash from an ATM to purchasing goods online on a smartphone—do not leverage quantum technology. The electronic transmission of a personal identification number (PIN) or password can be intercepted, posing a security risk.

Roughly three millimeters in size, the tiny chip uses quantum communication algorithms to provide enhanced security compared to existing standards. It does this by integrating passwords within the information that is being delivered, forming a secure quantum key. After the information is received, it is destroyed along with the key, making it an extremely secure form of communication.
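The article does not disclose which protocol the chip implements, but the flavor of quantum key generation can be illustrated with a classical simulation of BB84-style key sifting, a standard textbook protocol used here purely as a hypothetical stand-in:

```python
import secrets

def bb84_sift(n_bits=256):
    """Toy BB84 sifting: keep only the bits where sender and receiver
    happened to choose the same measurement basis."""
    # Alice picks random bits and random encoding bases
    # (0 = rectilinear, 1 = diagonal); Bob measures in his own random bases.
    alice_bits  = [secrets.randbelow(2) for _ in range(n_bits)]
    alice_bases = [secrets.randbelow(2) for _ in range(n_bits)]
    bob_bases   = [secrets.randbelow(2) for _ in range(n_bits)]

    # Without an eavesdropper, Bob's result matches Alice's bit whenever
    # the bases agree; mismatched-basis results are discarded ("sifting").
    return [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)
            if ab == bb]

key = bb84_sift()
print(f"sifted key length: {len(key)} of 256 raw bits (about half, on average)")
```

In a real quantum implementation, any interception disturbs the quantum states and shows up as errors during sifting, which is the security property the article alludes to.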

A new way to calculate the interaction between a metal and its alloying material could speed the hunt for a new material that combines the hardness of ceramic with the resilience of metal.

The discovery, made by engineers at the University of Michigan, identifies two aspects of this interaction that can accurately predict how a particular alloy will behave—and with fewer demanding, from-scratch quantum mechanical calculations.

“Our findings may enable the use of machine learning algorithms for alloy design, potentially accelerating the search for better alloys that could be used in turbine engines and nuclear reactors,” said Liang Qi, assistant professor of materials science and engineering who led the research.
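To see how such descriptors enable machine learning, consider a toy surrogate model: if two interaction quantities predict an alloy property, a cheap fit can stand in for many from-scratch quantum mechanical calculations. The descriptors and data below are synthetic placeholders, not the quantities identified in the paper:

```python
import numpy as np

# Synthetic "training set": two hypothetical interaction descriptors
# (d1, d2) and a property value that, in this toy, depends linearly on them.
rng = np.random.default_rng(0)
n = 40
d1 = rng.uniform(0.0, 1.0, n)          # placeholder descriptor 1
d2 = rng.uniform(0.0, 1.0, n)          # placeholder descriptor 2
prop = 2.0 * d1 - 1.5 * d2 + 0.3 + rng.normal(0, 0.01, n)  # mock "DFT" output

# Fit property ~ w1*d1 + w2*d2 + b by ordinary least squares.
X = np.column_stack([d1, d2, np.ones(n)])
w, *_ = np.linalg.lstsq(X, prop, rcond=None)

# Predict for a new composition without running a new quantum calculation.
pred = w @ np.array([0.5, 0.5, 1.0])
print(f"fitted weights: {np.round(w, 2)}, prediction at (0.5, 0.5): {pred:.2f}")
```

The point of the U-M result is that good descriptors make even simple models like this predictive, so the expensive quantum mechanical step is only needed to generate training data.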

Metasurfaces are optically thin metamaterials that can control the wavefront of light completely, although they are primarily used to control the phase of light. In a new report, Adam C. Overvig and colleagues in the departments of Applied Physics and Applied Mathematics at Columbia University and the Center for Functional Nanomaterials at Brookhaven National Laboratory in New York, U.S., presented a new approach, now published in Light: Science & Applications. The simple concept used meta-atoms with a varying degree of form birefringence and varying angles of rotation to create high-efficiency dielectric metasurfaces with the ability to control optical amplitude (the maximum extent of a vibration) and phase at one or two frequencies. The work opened applications in computer-generated holography, faithfully reproducing the phase and amplitude of a target holographic scene without the iterative algorithms that are typically required in phase-only holography.

The team demonstrated all-dielectric holograms with independent and complete control of the amplitude and phase. They used two simultaneous optical frequencies to generate two-dimensional (2-D) and 3D holograms in the study. The phase-amplitude metasurfaces allowed additional features that could not be attained with phase-only holography. The features included artifact-free 2-D holograms, the ability to encode separate phase and amplitude profiles at the object plane and encode intensity profiles at the metasurface and object planes separately. Using the method, the scientists also controlled the surface textures of 3D holographic objects.
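The reason full phase-and-amplitude control removes the iterative step can be seen in a toy Fourier-hologram calculation: if every meta-atom can impose an arbitrary complex transmission, the required mask is simply the inverse Fourier transform of the target scene. A minimal sketch of that idealized picture (not the team’s actual design pipeline):

```python
import numpy as np

# Target scene at the object plane: a simple bright bar.
target = np.zeros((64, 64), dtype=complex)
target[20:44, 30:34] = 1.0

# With full complex (amplitude + phase) control, the metasurface
# transmission is just the inverse FFT of the target -- no iteration.
mask = np.fft.ifft2(target)

# Propagating to the far field (an FFT, in this idealization)
# reproduces the scene exactly.
replay = np.fft.fft2(mask)
err = np.max(np.abs(replay - target))
print(f"max reconstruction error: {err:.2e}")
```

With phase-only control, by contrast, the amplitude of `mask` cannot be realized directly, which is what forces the Gerchberg-Saxton-style iterations the paper avoids.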

Light waves possess four key properties: amplitude, phase, polarization and optical impedance. Materials scientists use metamaterials or “metasurfaces” to tune these properties at specific frequencies with subwavelength spatial resolution. Researchers can also engineer individual structures or “meta-atoms” to facilitate a variety of optical functionalities. Device functionality is presently limited by the ability to control and integrate all four properties of light independently in the lab. Setbacks include the challenge of developing individual meta-atoms with varying responses at a desired frequency within a single fabrication protocol. Earlier studies used metallic scatterers for their strong light-matter interactions, but metals carry inherent optical losses; later work therefore turned to lossless dielectric platforms for high-efficiency control of phase, the single most important property for wavefront control.
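The combination of form birefringence and rotation has a compact textbook description in Jones calculus: a meta-atom acting as a waveplate with retardance delta, rotated by angle theta, transmits light through T = R(−θ)·diag(1, e^{iδ})·R(θ). The sketch below uses this generic rotated-waveplate model, not the specific meta-atom geometry from the study:

```python
import numpy as np

def rot(theta):
    """2x2 rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def meta_atom(delta, theta):
    """Jones matrix of a birefringent element with retardance `delta`,
    rotated in-plane by `theta`."""
    retarder = np.diag([1.0, np.exp(1j * delta)])
    return rot(-theta) @ retarder @ rot(theta)

# Example: a half-wave element (delta = pi) at 45 degrees converts
# x-polarized light to y-polarized light.
out = meta_atom(np.pi, np.pi / 4) @ np.array([1.0, 0.0])
print(np.round(out, 6))
```

Sweeping (δ, θ) from meta-atom to meta-atom is what gives the surface its pixel-by-pixel handle on amplitude and phase.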

Researchers at the Kavli Institute for the Physics and Mathematics of the Universe (WPI) and Tohoku University in Japan have recently identified an anomaly in the electromagnetic duality of Maxwell Theory. This anomaly, outlined in a paper published in Physical Review Letters, could play an important role in the consistency of string theory.

The recent study is a collaboration between Yuji Tachikawa and Kazuya Yonekura, two string theorists, and Chang-Tse Hsieh, a condensed matter theorist. Although the study started off as an investigation into string theory, it also has implications for other areas of physics.

In current physics theory, classical electromagnetism is described by Maxwell’s equations, which were first introduced by physicist James Clerk Maxwell around 1865. Objects governed by these equations include electric and magnetic fields, electrically charged particles (e.g., electrons and protons), and magnetic monopoles (i.e. hypothetical particles carrying single magnetic poles).
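In vacuum, Maxwell’s equations and the duality transformation at the heart of the study can be written compactly (here in Gaussian units):

```latex
% Vacuum Maxwell equations (Gaussian units):
\nabla \cdot \mathbf{E} = 0, \qquad
\nabla \cdot \mathbf{B} = 0, \qquad
\nabla \times \mathbf{E} = -\frac{1}{c}\frac{\partial \mathbf{B}}{\partial t}, \qquad
\nabla \times \mathbf{B} = \frac{1}{c}\frac{\partial \mathbf{E}}{\partial t}.
% The electromagnetic duality transformation exchanges the fields,
\mathbf{E} \to \mathbf{B}, \qquad \mathbf{B} \to -\mathbf{E},
% and maps the set of equations to itself; with both electric and
% magnetic charges present, it likewise rotates (q_e, q_m) \to (q_m, -q_e).
```

Substituting the transformed fields back into the four equations reproduces the same set, which is what it means for the duality to be a symmetry; the anomaly identified in the paper concerns what happens to this symmetry at the quantum level.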

Physicists can exploit tailored physical systems to rapidly solve challenging computational tasks such as spin simulation, combinatorial optimization and focusing light through scattering media. In a new report in Science Advances, C. Tradonsky and a group of researchers in the Departments of Physics in Israel and India addressed the phase retrieval problem: reconstructing an object from its scattered intensity distribution. The experimental approach addresses a long-standing problem in disciplines ranging from X-ray imaging to astrophysics, which lack direct techniques to reconstruct an object of interest and typically rely on indirect iterative algorithms that are inherently slow.

In the new optical approach, Tradonsky et al. instead used a digital degenerate cavity laser (DDCL) to rapidly and efficiently reconstruct the object of interest. The experimental results suggested that the gain competition between the many lasing modes acted as a highly parallel computer that rapidly solved the phase retrieval problem. The approach applies to two-dimensional (2-D) objects with known compact support and to complex-valued objects, generalizes to imaging through scattering media, and can tackle other challenging computational tasks.

The intensity distribution of light scattered far from an unknown object is relatively easy to calculate: it is the squared absolute value of the object’s Fourier transform. The reconstruction of an object from its scattered intensity distribution is, however, ill-posed, since the phase information is lost and different phase distributions can result in different reconstructions. Scientists must therefore obtain prior information about an object’s shape, positivity, spatial symmetry or sparsity for more precise object reconstructions. Such examples are found in astronomy, short-pulse characterization studies, X-ray diffraction, radar detection, speech recognition and imaging across turbid media. For objects with a finite extent (compact support), the phase retrieval problem has a unique solution, provided the scattered intensity is sampled at a sufficiently high resolution.
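The slow iterative reconstruction that the laser-based approach replaces can be sketched in a few lines: Fienup-style error reduction alternates between enforcing the measured Fourier magnitude and the known support (plus positivity). A minimal illustration on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
support = np.zeros((n, n), dtype=bool)
support[8:24, 8:24] = True
obj = np.where(support, rng.uniform(0, 1, (n, n)), 0.0)  # ground-truth object
measured_mag = np.abs(np.fft.fft2(obj))  # the "scattered intensity" data

def error_reduction(mag, support, n_iter=200):
    """Error-reduction phase retrieval: alternate projections between the
    Fourier-magnitude constraint and the support/positivity constraint."""
    guess = np.where(support, rng.uniform(0, 1, support.shape), 0.0)
    for _ in range(n_iter):
        f = np.fft.fft2(guess)
        f = mag * np.exp(1j * np.angle(f))           # impose measured magnitude
        g = np.fft.ifft2(f).real
        guess = np.where(support & (g > 0), g, 0.0)  # impose support + positivity
    return guess

rec = error_reduction(measured_mag, support)
residual = np.linalg.norm(np.abs(np.fft.fft2(rec)) - measured_mag)
print(f"Fourier-magnitude residual after 200 iterations: {residual:.3f}")
```

Each iteration costs two FFTs, and hundreds to thousands of iterations are common, which is exactly the per-reconstruction overhead that the mode competition inside the DDCL sidesteps by exploring many candidate phase configurations in parallel.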