
If travel to distant stars within an individual’s lifetime is going to be possible, a means of faster-than-light propulsion will have to be found. To date, even recent research about superluminal (faster-than-light) transport based on Einstein’s theory of general relativity would require vast amounts of hypothetical particles and states of matter that have “exotic” physical properties such as negative energy density. This type of matter either cannot currently be found or cannot be manufactured in viable quantities. In contrast, new research carried out at the University of Göttingen gets around this problem by constructing a new class of hyper-fast ‘solitons’ using sources with only positive energies that can enable travel at any speed. This reignites debate about the possibility of faster-than-light travel based on conventional physics. The research is published in the journal Classical and Quantum Gravity.

The author of the paper, Dr Erik Lentz, analysed existing research and discovered gaps in previous ‘warp drive’ studies. Lentz noticed that there existed yet-to-be explored configurations of space-time curvature organized into ‘solitons’ that have the potential to solve the puzzle while being physically viable. A soliton — in this context also informally referred to as a ‘warp bubble’ — is a compact wave that maintains its shape and moves at constant velocity. Lentz derived the Einstein equations for unexplored soliton configurations (where the space-time metric’s shift vector components obey a hyperbolic relation), finding that the altered space-time geometries could be formed in a way that worked even with conventional energy sources. In essence, the new method uses the very structure of space and time arranged in a soliton to provide a solution to faster-than-light travel, which — unlike other research — would only need sources with positive energy densities.
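For readers who want the geometric template behind that statement, warp-drive studies typically work in the 3+1 (ADM) form of the metric, in which the "warp bubble" is carried entirely by the shift vector. The expression below is only that generic template (with unit lapse and flat spatial slices); the specific hyperbolic relation Lentz imposes on the shift components, and the resulting positive-energy soliton, are worked out in the paper and are not reproduced here.

```latex
% Generic 3+1 (ADM) line element with unit lapse and flat spatial slices.
% The "warp bubble" is encoded in the shift vector N^i; a soliton moving at
% constant velocity v depends on position only through the comoving coordinate.
% Lentz's specific hyperbolic condition on the N^i is left to the paper.
\[
  ds^2 \;=\; -\,dt^2 \;+\; \sum_{i=1}^{3}\bigl(dx^i - N^i\,dt\bigr)^2,
  \qquad N^i = N^i(x - v t,\, y,\, z).
\]
```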

Earlier this year, in June 2021, the British Ministry of Defence employed Rafael's DRONE DOME counter-UAV system to protect world leaders at the G7 Summit in Cornwall, England, from unmanned aerial threats. Three years ago, Britain's Defence Ministry purchased several DRONE DOME systems, which it has since employed successfully in a multitude of operational scenarios, including protecting both the physical site and the participants of this year's G7 summit. Rafael's DRONE DOME is an innovative, end-to-end, combat-proven counter-Unmanned Aerial System (C-UAS) providing all-weather, 360-degree rapid defence against hostile drones. Fully operational and globally deployed, DRONE DOME offers a modular, robust infrastructure comprising electronic jammers, sensors, and unique artificial intelligence algorithms to effectively secure threatened airspace.

Meir Ben Shaya, Rafael's EVP for Marketing and Business Development of Air Defence Systems, said: Rafael today recognizes two new and key trends in the field of counter-UAVs, both of which DRONE DOME can successfully defend against. The first trend is the number of drones employed during an attack and the operational need to be able to counter multiple, simultaneous attacks; this is a significant, practical challenge that any successful system must be able to overcome. The second trend is the type of tool being employed. Previously, air defence systems were developed to seek out conventional aircraft, large unmanned aerial vehicles, and missiles, but today these defence systems must also tackle smaller, slower, low-flying threats which are becoming more and more autonomous.

“You may hit the tipping point when you’re 50; it may happen when you’re 80; it may never happen,” Schindler said. “But once you pass the tipping point, you’re going to accumulate high levels of amyloid that are likely to cause dementia. If we know how much amyloid someone has right now, we can calculate how long ago they hit the tipping point and estimate how much longer it will be until they are likely to develop symptoms.”


Summary: A new algorithm uses neuroimaging data of amyloid levels in the brain and takes into account a person’s age to determine when a person with genetic Alzheimer’s risk factors, and with no signs of cognitive decline, will develop the disease.

Source: WUSTL

Researchers at Washington University School of Medicine in St. Louis have developed an approach to estimating when a person who is likely to develop Alzheimer’s disease, but has no cognitive symptoms, will start showing signs of Alzheimer’s dementia.

The algorithm, available online in the journal Neurology, uses data from a kind of brain scan known as amyloid positron emission tomography (PET) to gauge brain levels of the key Alzheimer’s protein amyloid beta.
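The estimate Schindler describes amounts to a simple piece of arithmetic once the amyloid trajectory is known. The sketch below is a purely hypothetical illustration of that idea: the tipping-point threshold, accumulation rate, symptom level, and the assumption of roughly linear growth after the crossing are all stand-ins, not the fitted model from the Neurology paper.

```python
# Hypothetical illustration of the "tipping point" idea described above.
# All numbers (threshold, accumulation rate, symptom level) are made up for
# the sketch; the actual study fits these from longitudinal amyloid-PET data.

def years_since_tipping_point(current_amyloid, tipping_point=1.2, rate_per_year=0.05):
    """Estimate how long ago amyloid crossed the tipping point,
    assuming roughly linear accumulation after the crossing."""
    if current_amyloid <= tipping_point:
        return 0.0  # has not crossed yet
    return (current_amyloid - tipping_point) / rate_per_year

def years_until_symptoms(current_amyloid, symptom_level=1.8, rate_per_year=0.05):
    """Estimate time remaining until amyloid reaches a (hypothetical) level
    at which dementia symptoms typically appear."""
    if current_amyloid >= symptom_level:
        return 0.0
    return (symptom_level - current_amyloid) / rate_per_year

if __name__ == "__main__":
    scan_value = 1.5  # e.g. a single amyloid-PET measurement
    print(f"Crossed tipping point ~{years_since_tipping_point(scan_value):.1f} years ago")
    print(f"Estimated ~{years_until_symptoms(scan_value):.1f} years until likely symptom onset")
```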

Dementia has many faces, and because of the wide range of ways in which it can develop and affect patients, it can be very challenging to treat. Now, however, using supercomputer analysis of big data, researchers from Japan were able to predict that a single protein is a key factor in the damage caused by two very common forms of dementia.

In a study published this month in Communications Biology, researchers from Tokyo Medical and Dental University (TMDU) have revealed that HMGB1 is a key player in both frontotemporal lobar degeneration and Alzheimer disease, two of the most common causes of dementia.

Frontotemporal lobar degeneration can be caused by mutation of a variety of genes, which means that no one treatment will be right for all patients. However, there are some similarities between frontotemporal lobar degeneration and Alzheimer disease, which led the researchers at Tokyo Medical and Dental University (TMDU) to explore whether these two conditions cause damage to the brain in the same way.

Every piece of data that travels over the internet — from paragraphs in an email to 3D graphics in a virtual reality environment — can be altered by the noise it encounters along the way, such as electromagnetic interference from a microwave or Bluetooth device. The data are coded so that when they arrive at their destination, a decoding algorithm can undo the negative effects of that noise and retrieve the original data.
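As a concrete (if deliberately simple) illustration of what such a code and decoder do, the sketch below uses a three-fold repetition code: each bit is transmitted three times, and the decoder takes a majority vote to undo isolated bit flips caused by noise. Real internet codes are far more sophisticated, but the encode-then-decode roles are the same.

```python
import random

# Toy error-correcting code: 3x repetition with majority-vote decoding.
# Real network codes (LDPC, polar, Reed-Solomon, ...) are far more efficient,
# but the division of labour (encode before transmission, decode after noise)
# is the same one described above.

def encode(bits):
    """Repeat every bit three times."""
    return [b for b in bits for _ in range(3)]

def noisy_channel(bits, flip_prob=0.05):
    """Flip each transmitted bit with a small probability (the 'noise')."""
    return [b ^ 1 if random.random() < flip_prob else b for b in bits]

def decode(received):
    """Majority vote over each group of three received bits."""
    out = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

if __name__ == "__main__":
    message = [random.randint(0, 1) for _ in range(16)]
    received = noisy_channel(encode(message))
    recovered = decode(received)
    print("errors after decoding:", sum(m != r for m, r in zip(message, recovered)))
```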

Since the 1950s, most error-correcting codes and decoding algorithms have been designed together. Each code had a structure that corresponded with a particular, highly complex decoding algorithm, which often required the use of dedicated hardware.

Researchers at MIT.

A new study has found that a material (nickel oxide, a quantum material) can mimic the sea slug's most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable.


For artificial intelligence to get any smarter, it needs first to be as intelligent as one of the simplest creatures in the animal kingdom: the sea slug.

A new study has found that a material can mimic the sea slug’s most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable for technology ranging from self-driving cars and surgical robots to social media algorithms.

The study, publishing this week in the Proceedings of the National Academy of Sciences, was conducted by a team of researchers from Purdue University, Rutgers University, the University of Georgia and Argonne National Laboratory.

“The dream of predicting a protein shape just from its gene sequence is now a reality,” said Paul Adams, Associate Laboratory Director for Biosciences at Berkeley Lab. For Adams and other structural biologists who study proteins, predicting their shape offers a key to understanding their function and accelerating treatments for diseases like cancer and COVID-19.

The current approaches to accurately mapping that shape, however, usually rely on complex experiments at synchrotrons. But even these sophisticated processes have their limitations: the data aren't always of sufficient quality to understand a protein at the atomic level. By applying powerful machine learning methods to the large library of protein structures, it is now possible to predict a protein's shape from its gene sequence.

Researchers in Berkeley Lab’s Molecular Biophysics & Integrated Bioimaging Division joined an effort led by the University of Washington to produce a computer software tool called RoseTTAFold. The algorithm simultaneously takes into account patterns, distances, and coordinates of amino acids. As these data inputs flow in, the tool assesses relationships within and between structures, eventually helping to build a very detailed picture of a protein’s structure.
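One rough way to picture those interacting inputs is as three "tracks" (per-residue sequence features, pairwise distance features, and 3D coordinates) that repeatedly update one another. The NumPy sketch below is only a structural caricature of that loop, using random data and made-up update rules; it is not RoseTTAFold's actual network.

```python
import numpy as np

# Structural caricature of a "three-track" refinement loop: per-residue features,
# pairwise distance-like features, and 3D coordinates repeatedly inform one another.
# Everything here (random inputs, update rules) is illustrative only and is NOT
# the RoseTTAFold architecture.

rng = np.random.default_rng(0)
L, d = 64, 32                      # residues, feature size
seq = rng.normal(size=(L, d))      # 1D track: per-residue features from the sequence
pair = rng.normal(size=(L, L))     # 2D track: pairwise (distance-like) features
coords = rng.normal(size=(L, 3))   # 3D track: candidate atomic coordinates

for _ in range(5):
    # 1D -> 2D: residue features suggest which pairs should interact
    pair = 0.5 * pair + 0.5 * (seq @ seq.T) / d
    # 2D -> 3D: nudge coordinates of strongly paired residues toward each other
    weights = np.exp(pair) / np.exp(pair).sum(axis=1, keepdims=True)
    coords = 0.8 * coords + 0.2 * (weights @ coords)
    # 3D -> 1D: fold the geometry (distances to all residues) back into residue features
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    seq = 0.9 * seq + 0.1 * (dist @ seq) / L

print("refined coordinate spread:", coords.std(axis=0))
```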

Summary: Machine learning algorithm produced fewer decision-making errors than professionals when it came to clinical diagnosis of patients.

Source: University of Montreal.

It’s an old adage: there’s no harm in getting a second opinion. But what if that second opinion could be generated by a computer, using artificial intelligence? Would it come up with better treatment recommendations than those your medical professional proposes?

A new type of artificial intelligence (AI) algorithm, developed by the Mayo Clinic and the Google Research Brain Team, can potentially pave the way toward more directed brain stimulation for the treatment of Parkinson’s disease and other movement-related disorders.

According to researchers, this algorithm can more accurately determine the interaction between different regions of the brain — data that will be key for improving the way brain stimulation devices are used in the real world for treating Parkinson’s.

“Our findings show that this new type of algorithm may help us understand which brain regions directly interact with one another, which in turn may help guide placement of electrodes for stimulating devices to treat network brain diseases,” Kai Miller, MD, PhD, a neurosurgeon at Mayo Clinic and the first author of the study, said in a press release.
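To make "which brain regions directly interact with one another" concrete, the sketch below builds a region-by-region interaction matrix from simulated stimulation-evoked responses: stimulate one region, record the response amplitude everywhere else, and repeat. It is a generic, hypothetical illustration on synthetic data, not the algorithm developed by the Mayo Clinic and Google Research teams.

```python
import numpy as np

# Generic illustration (synthetic data): estimate how strongly each brain region
# responds when another region is electrically stimulated. The resulting matrix
# is a crude stand-in for the kind of region-to-region interaction map described
# above; it is NOT the algorithm from the Mayo Clinic / Google study.

rng = np.random.default_rng(1)
n_regions, n_samples = 6, 200

# Hidden "ground truth" connectivity we pretend the brain has
true_connectivity = rng.uniform(0, 1, size=(n_regions, n_regions))
np.fill_diagonal(true_connectivity, 0)

interaction = np.zeros((n_regions, n_regions))
for stim in range(n_regions):
    # Simulated evoked response at every region after stimulating `stim`,
    # with measurement noise added
    evoked = true_connectivity[stim][:, None] * np.sin(
        np.linspace(0, np.pi, n_samples)
    ) + 0.1 * rng.normal(size=(n_regions, n_samples))
    # Interaction strength = peak absolute response amplitude per region
    interaction[stim] = np.abs(evoked).max(axis=1)

np.fill_diagonal(interaction, 0)
print(np.round(interaction, 2))
```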