
What does quark-gluon plasma—the hot soup of elementary particles formed a few microseconds after the Big Bang—have in common with tap water? Scientists say it’s the way it flows.

A new study, published today in the journal SciPost Physics, has highlighted the surprising similarities between quark-gluon plasma, the first matter thought to have filled the early Universe, and the water that comes from our tap.

The ratio between the viscosity of a fluid, the measure of how runny it is, and its density determines how it flows. While both the viscosity and density of quark-gluon plasma are about 16 orders of magnitude larger than in water, the researchers found that the ratio between the viscosity and density of the two fluids is the same. This suggests that one of the most exotic states of matter known to exist in our universe would flow out of your tap in much the same way as water.
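As a back-of-the-envelope illustration (the quark-gluon plasma numbers below are placeholders scaled up by the article's quoted 16 orders of magnitude, not measured values), the quantity that governs flow is the kinematic viscosity, i.e. viscosity divided by density, and scaling both by the same factor leaves it unchanged:

```python
# Back-of-the-envelope sketch: the kinematic viscosity (dynamic
# viscosity / density) controls how a fluid flows. The QGP values
# here are hypothetical placeholders, assumed ~16 orders of
# magnitude above water as the article states.
eta_water = 1.0e-3   # dynamic viscosity of water, Pa*s (approx., 20 C)
rho_water = 1.0e3    # density of water, kg/m^3

scale = 1.0e16                  # factor quoted in the article
eta_qgp = eta_water * scale     # hypothetical QGP viscosity
rho_qgp = rho_water * scale     # hypothetical QGP density

print(eta_water / rho_water)    # 1e-06 m^2/s
print(eta_qgp / rho_qgp)        # 1e-06 m^2/s -- identical ratio
```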

A new map of dark matter in the local universe reveals several previously undiscovered filamentary structures connecting galaxies. The map, developed using machine learning by an international team including a Penn State astrophysicist, could enable studies about the nature of dark matter as well as about the history and future of our local universe.

Dark matter is an elusive substance that makes up about 80% of the matter in the universe. It also provides the skeleton for what cosmologists call the cosmic web, the large-scale structure of the universe that, through its gravitational influence, dictates the motion of galaxies and other cosmic material. However, the distribution of local dark matter is currently unknown because it cannot be measured directly. Researchers must instead infer its distribution based on its gravitational influence on other objects in the universe, like galaxies.

“Ironically, it’s easier to study the distribution of dark matter much further away because it reflects the very distant past, which is much less complex,” said Donghui Jeong, associate professor of astronomy and astrophysics at Penn State and a corresponding author of the study. “Over time, as the large-scale structure of the universe has grown, the complexity of the universe has increased, so it is inherently harder to make measurements about dark matter locally.”

Astronomers have discovered an exceedingly old star at the edge of our galaxy that seems to have formed only a few million years after the Big Bang – and what they are learning from it could affect their understanding of the birth of the universe.

In a study published last week, researchers found the star during an astronomical survey of the southern sky with a technique called narrowband photometry, which measures the brightness of distant stars in different wavelengths of light and can reveal stars that have low levels of heavy elements.
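As a rough sketch of how such a selection works in principle (the filter logic and cut value below are hypothetical, not the survey's actual criteria): a star poor in heavy elements absorbs less light in a narrow filter centered on a metal line, so its narrowband-minus-broadband color is unusually blue.

```python
# Minimal sketch of narrowband selection for metal-poor stars.
# A star with few heavy elements absorbs less light in a narrow
# filter centered on a metallicity-sensitive line, so its
# narrowband-minus-broadband color is smaller (bluer) than usual.
# Filter logic and the cut value here are hypothetical.

def metal_poor_candidate(m_narrow: float, m_broad: float,
                         cut: float = 0.2) -> bool:
    """Flag a star whose narrowband excess is small, hinting at
    weak metal-line absorption (smaller color => less absorption)."""
    color = m_narrow - m_broad
    return color < cut

# Example: two stars with the same broadband magnitude.
print(metal_poor_candidate(14.05, 14.0))  # True  -> candidate
print(metal_poor_candidate(14.60, 14.0))  # False -> metal-rich-like
```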

They then studied the star – known by its survey number as SPLUS J210428.01−004934.2, or SPLUS J2104−0049 for short – with high-resolution spectroscopy to determine its chemical makeup.

Regardless of size, all black holes experience similar accretion cycles, a new study finds.

On September 9, 2018, astronomers spotted a flash from a galaxy 860 million light years away. The source was a supermassive black hole about 50 million times the mass of the sun. Normally quiet, the gravitational giant suddenly awoke to devour a passing star in a rare occurrence known as a tidal disruption event. As the stellar debris fell toward the black hole, it released an enormous amount of energy in the form of light.

Researchers at MIT, the European Southern Observatory, and elsewhere used multiple telescopes to keep watch on the event, labeled AT2018fyk. To their surprise, they observed that as the supermassive black hole consumed the star, it exhibited properties similar to those of much smaller, stellar-mass black holes.

An international research team analyzed a database of more than 1,000 supernova explosions and found that models for the expansion of the Universe best match the data when a new time-dependent variation is introduced. If proven correct with future, higher-quality data from the Subaru Telescope and other observatories, these results could indicate still-unknown physics at work on cosmic scales.

Edwin Hubble’s observations over 90 years ago showing the expansion of the Universe remain a cornerstone of modern astrophysics. But when it comes to calculating how fast the Universe was expanding at different points in its history, scientists have had difficulty getting theoretical models to match observations.

To solve this problem, a team led by Maria Dainotti (Assistant Professor at the National Astronomical Observatory of Japan and the Graduate University for Advanced Studies, SOKENDAI in Japan and an affiliated scientist at the Space Science Institute in the U.S.A.) analyzed a catalog of 1048 supernovae which exploded at different times in the history of the Universe. The team found that the theoretical models can be made to match the observations if one of the constants used in the equations, appropriately called the Hubble constant, is allowed to vary with time.
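One simple way to realize this, sketched here as an illustrative form rather than a quotation of the paper's exact fit, is to let the constant decay as a slow power law of redshift z:

```latex
% Illustrative parameterization: the Hubble constant is allowed to
% drift slowly with redshift z; \alpha = 0 recovers the usual
% constant value \tilde{H}_0.
H_0(z) = \frac{\tilde{H}_0}{(1+z)^{\alpha}}
```

Fitting the supernova catalog then amounts to asking whether the data prefer a nonzero α over the standard α = 0 case.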

At the time the proposed planet signal is strongest, stellar activity on the surface of the star was also strong, says Lubin. Thus, he notes, the signal associated with the planet can be explained by stellar activity rather than by the telltale periodic tug on Barnard’s Star from a putative super-Earth.
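A common way to run this kind of cross-check, sketched below on synthetic data, is to compare a periodogram of the radial-velocity series with one of a stellar-activity indicator measured at the same epochs: a peak that shows up in both is a warning that the "planet" may really be activity. All numbers here, including the 233-day period, are illustrative, and astropy is assumed to be available.

```python
# Sketch: if the radial-velocity (RV) period also appears in an
# activity indicator, the "planet" signal is suspect. Synthetic data.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 2000, 300))    # observation times, days
period = 233.0                            # candidate period, illustrative
signal = np.sin(2 * np.pi * t / period)

rv = 1.2 * signal + rng.normal(0, 0.5, t.size)        # RV, m/s
activity = 0.8 * signal + rng.normal(0, 0.5, t.size)  # activity index

freq = np.linspace(1 / 1000, 1 / 10, 5000)  # cycles per day
p_rv = LombScargle(t, rv).power(freq)
p_act = LombScargle(t, activity).power(freq)

best = 1 / freq[np.argmax(p_rv)]
print(f"RV peak near {best:.0f} d; activity power there: "
      f"{p_act[np.argmax(p_rv)]:.2f}")  # high -> likely stellar activity
```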

As I noted here previously, Barnard’s Star, which lies only 6 light years away in Ophiuchus, has long fascinated astronomers, both for its proximity to Earth and because it has the largest apparent motion across our line of sight of any known stellar object. In the 105 years since its discovery by astronomer E. E. Barnard, it has remained the nearest star to our own Sun in the Northern Celestial Hemisphere, the authors note.
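For a sense of scale, the standard conversion from proper motion to sky-plane speed is v_t ≈ 4.74 μ d km/s, with μ in arcseconds per year and d in parsecs; the quick sketch below plugs in Barnard’s Star’s approximate catalog values:

```python
# Quick arithmetic: tangential (sky-plane) velocity from proper motion.
# v_t [km/s] = 4.74 * mu [arcsec/yr] * d [pc]
mu = 10.4   # proper motion of Barnard's Star, arcsec/yr (approx.)
d = 1.83    # distance, parsecs (~6 light years)

v_t = 4.74 * mu * d
print(f"tangential velocity ~ {v_t:.0f} km/s")  # ~90 km/s
```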

One of the more infamous claims of planets around Barnard’s Star came in 1963, when Swarthmore College astronomer Peter van de Kamp announced that he had detected a planet using Swarthmore’s 24-inch refractor at Sproul Observatory. Van de Kamp later updated his findings three more times, eventually proposing a second planet in the system, with the two planets having periods of 12 and 20 years, respectively, the authors note.

New observations and simulations show that jets of high-energy particles emitted from the central massive black hole in the brightest galaxy of a galaxy cluster can be used to map the structure of invisible intra-cluster magnetic fields. These findings provide astronomers with a new tool for investigating previously unexplored aspects of clusters of galaxies.

As clusters of galaxies grow through collisions with surrounding matter, they create bow shocks and wakes in their dilute plasma. The plasma motion induced by these activities can drape the intra-cluster magnetic fields into layers, forming virtual walls of magnetic force. These magnetic layers, however, can only be observed indirectly, when something interacts with them. Because such interactions are difficult to identify, the nature of intra-cluster magnetic fields remains poorly understood, and a new approach to mapping and characterizing these magnetic layers is highly desirable.

Cosmologists love universe simulations. Even models covering hundreds of millions of light years can be useful for understanding fundamental aspects of cosmology and the early universe. There’s just one problem: they’re extremely computationally intensive. A 500-million-light-year swath of the universe could take more than three weeks to simulate. Now, scientists led by Yin Li at the Flatiron Institute have developed a way to run these cosmically huge models 1,000 times faster. That same 500-million-light-year swath could then be simulated in 36 minutes.

Older algorithms took such a long time in part because of a tradeoff: existing models could simulate either a very detailed, very small slice of the cosmos or a coarsely detailed larger one. They could provide either high resolution or a large volume to study, not both.

To overcome this dichotomy, Dr. Li turned to an AI technique called a generative adversarial network (GAN). A GAN pits two neural networks against each other: one generates candidate outputs, while the other judges them against real examples. With each round of training the generator improves, until its output becomes hard for the judge to distinguish from the real thing.
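As a minimal sketch of that two-network loop (a toy generator learning a one-dimensional Gaussian, not the team's actual cosmological model; PyTorch is assumed here):

```python
# Toy GAN: a generator learns to mimic samples drawn from N(3, 1),
# while a discriminator learns to tell real samples from fakes.
# Illustrative only -- not the team's cosmological model.
import torch
import torch.nn as nn

gen = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
disc = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) + 3.0     # samples from the "truth"
    fake = gen(torch.randn(64, 8))      # generator's current attempt

    # Discriminator round: label real data 1, generated data 0.
    d_loss = (bce(disc(real), torch.ones(64, 1)) +
              bce(disc(fake.detach()), torch.zeros(64, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator round: try to make the discriminator say 1 on fakes.
    g_loss = bce(disc(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# The generated mean should have drifted toward the true mean of 3.
print(gen(torch.randn(1000, 8)).mean().item())
```

With enough rounds neither network can easily win, and the generator's output statistically matches the training data, which is what lets a fast network stand in for a slow simulation.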