
Researchers have discovered a new Earth-sized planet orbiting a star outside our solar system. The planet, called Kepler-1649c, is only around 1.06 times the size of Earth, making it very similar to our own planet in terms of physical dimensions. It’s also quite close to its star, orbiting at a distance where it receives around 75% of the light Earth gets from the Sun.

The planet’s star is a red dwarf, a class of star prone to the kind of flares that might have made it difficult for life to evolve on the rocky planet’s surface, unlike conditions here in our own neighborhood. Kepler-1649c also orbits so close to its star that one of its years lasts just 19.5 Earth days. But because the star puts out significantly less heat than the Sun, that tight orbit still falls within the region that allows for the presence of liquid water.
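That 75% figure is enough for a rough sanity check. Here is a back-of-the-envelope sketch in Python using the standard equilibrium-temperature scaling; the flux ratio comes from the article, while the Earth reference temperature and albedo are assumed textbook values, not numbers from the study.

```python
# Rough habitability check: equilibrium temperature scales with the
# fourth root of received stellar flux, T_eq = T_earth * (S / S_earth)**0.25.
S_RATIO = 0.75       # flux relative to Earth (from the article)
T_EQ_EARTH = 255.0   # K, Earth's equilibrium temperature at albedo ~0.3 (assumed)

t_eq = T_EQ_EARTH * S_RATIO ** 0.25
print(f"Estimated equilibrium temperature: {t_eq:.0f} K ({t_eq - 273.15:.0f} °C)")
# ~237 K before any greenhouse effect; for comparison, Earth's own 255 K
# equilibrium value is lifted to a ~288 K surface average by its atmosphere.
```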

Kepler-1649c was found by scientists digging into existing observations gathered by the Kepler space telescope before its retirement from operational status in 2018. An algorithm that was developed to go through the troves of data collected by the telescope and identify potential planets for further study failed to properly ID this one, but researchers noticed it when reviewing the information.

“It’s extremely exciting to see if it can turn up any algorithms that we haven’t even thought of yet, the impact of which to our daily lives may be enormous,” one computer expert told Newsweek.

The synthesis of plastic precursors such as polymers involves specialized catalysts. However, the traditional batch-based method of finding and screening the right catalysts for a given result consumes liters of solvent, generates large quantities of chemical waste, and is an expensive, time-consuming process involving multiple trials.

Ryan Hartman, professor of chemical and biomolecular engineering at the NYU Tandon School of Engineering, and his laboratory developed a lab-based “intelligent microsystem” that employs machine learning for modeling and shows promise for eliminating this costly process and minimizing environmental harm.

In their research, “Combining automated microfluidic experimentation with machine learning for efficient polymerization design,” published in Nature Machine Intelligence, the collaborators, including doctoral student Benjamin Rizkin, employed a custom-designed, rapidly prototyped microreactor in conjunction with automation and in situ infrared thermography to study exothermic (heat-generating) polymerization, a class of reactions that is notoriously difficult to control when limited experimental kinetic data are available. By pairing efficient microfluidic technology with machine learning algorithms to obtain high-fidelity datasets based on minimal iterations, they were able to reduce chemical waste by two orders of magnitude and cut catalyst discovery from weeks to hours.
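The paper’s exact pipeline isn’t reproduced in this summary, but the core idea, letting a model trained on the experiments so far choose the next experiment, can be sketched. Below is a minimal, hypothetical illustration in Python: run_microreactor_experiment is a toy stand-in response surface, not the authors’ system, and the Gaussian-process surrogate with an upper-confidence-bound pick is one common way to realize such a loop.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def run_microreactor_experiment(conditions):
    """Hypothetical stand-in for the automated microreactor: returns a
    measured outcome (e.g., polymer yield) for (temperature, catalyst conc.)."""
    t, c = conditions
    return float(np.exp(-((t - 70.0) / 20.0) ** 2) * c / (0.5 + c))  # toy surface

rng = np.random.default_rng(0)
# Candidate conditions: temperature (deg C) x catalyst concentration (mol/L).
grid = np.array([(t, c) for t in np.linspace(30, 110, 17)
                 for c in np.linspace(0.05, 1.0, 20)])

# Seed with a few random runs, then let the surrogate model pick each next
# condition -- far fewer runs than the exhaustive batch screening the
# article says wastes solvent and time.
seed = rng.choice(len(grid), size=5, replace=False)
X = [grid[i] for i in seed]
y = [run_microreactor_experiment(x) for x in X]

for _ in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(np.array(X), np.array(y))
    mu, sigma = gp.predict(grid, return_std=True)
    nxt = grid[int(np.argmax(mu + 1.96 * sigma))]  # probe where upside is largest
    X.append(nxt)
    y.append(run_microreactor_experiment(nxt))

best = int(np.argmax(y))
print("Best conditions found:", X[best], "with outcome", round(y[best], 3))
```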

Nowadays, artificial neural networks have an impact on many areas of our day-to-day lives. They are used for a wide variety of complex tasks, such as driving cars, performing speech recognition (for example, Siri, Cortana, Alexa), suggesting shopping items and trends, or improving visual effects in movies (e.g., animated characters such as Thanos from Marvel’s Avengers: Infinity War).

Traditionally, algorithms are handcrafted to solve complex tasks. This requires experts to spend a significant amount of time identifying the optimal strategies for various situations. Artificial neural networks, inspired by the interconnected neurons of the brain, can instead automatically learn a close-to-optimal solution for a given objective from data. Often, the automated learning or “training” required to obtain these solutions is “supervised” through the use of supplementary information provided by an expert, such as labels for the training examples. Other approaches are “unsupervised” and identify patterns in the data on their own. The mathematical theory behind artificial neural networks has evolved over several decades, yet only recently have we developed our understanding of how to train them efficiently. The required calculations are very similar to those performed by standard video graphics cards (which contain a graphics processing unit, or GPU) when rendering three-dimensional scenes in video games.
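To make the supervised case concrete, here is a self-contained sketch: a tiny two-layer network learns XOR from four expert-labeled examples using plain NumPy. The architecture and learning rate are arbitrary choices for illustration, and the matrix multiplications doing the work here are exactly the kind of arithmetic GPUs accelerate.

```python
import numpy as np

# Supervised learning in miniature: inputs X with expert-provided labels y.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR labels

rng = np.random.default_rng(42)
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer of 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for step in range(5000):
    # Forward pass: matrix multiplies plus nonlinearities.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the cross-entropy loss.
    d_out = p - y
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    # Gradient-descent update nudges the weights toward lower loss.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(np.round(p.ravel(), 2))  # approaches the labels [0, 1, 1, 0]
```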

Neuroscientists have created an artificially intelligent algorithm that decodes human brain activity and translates it into English sentences—and they say it is the first time such translations have been produced at speeds comparable to natural human speech.

Within a week, many world leaders went from downplaying the seriousness of the coronavirus to declaring a state of emergency. Even the most capable nations seem simultaneously confused and exasperated, with delayed responses revealing incompetence and inefficiency the world over.

This raises the question: why is it so difficult for us to comprehend the scale of what an unmitigated global pandemic could do? The answer likely relates to how we process abstract concepts like exponential growth. Part of the reason we’ve struggled so much to apply basic math to our practical environment is that humans think linearly. But like much of technology, biological systems such as viruses can grow exponentially.
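A few lines of Python make the gap between those two intuitions vivid. The 25%-per-day growth rate below is an arbitrary illustrative figure, not an epidemiological estimate.

```python
# Linear intuition vs. exponential reality, starting from 100 cases.
cases_linear = 100.0   # "about 25 more cases each day"
cases_exp = 100.0      # "25% more cases each day"
for day in range(30):
    cases_linear += 25
    cases_exp *= 1.25

print(f"Day 30 -- linear guess: {cases_linear:.0f} cases")      # 850
print(f"Day 30 -- exponential growth: {cases_exp:.0f} cases")   # ~80,779
```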

As we scramble to contain and fight the pandemic, we’ve turned to technology as our saving grace. In doing so, we’ve effectively hit a “fast-forward” button on many tech trends that were already in place. From remote work and virtual events to virus-monitoring big data, technologies that were perhaps only familiar to a fringe tech community are now entering center stage—and as tends to be the case with wartime responses, these changes are likely here to stay.

Chip maker Intel has been chosen to lead a new initiative from the U.S. military’s research wing, DARPA, aimed at improving cyber-defenses against deception attacks on machine learning models.

Machine learning is a kind of artificial intelligence that allows systems to improve over time with new data and experiences. One of its most common use cases today is object recognition: taking a photo and describing what’s in it. That can help people with impaired vision understand the contents of a photo they can’t see, for example, but it can also be used by other computers, such as autonomous vehicles, to identify what’s on the road.
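As a concrete picture of that use case, here is a minimal object-recognition sketch using a pretrained ImageNet classifier from torchvision; the image path is hypothetical, and any off-the-shelf classifier would do.

```python
import torch
from torchvision import models
from PIL import Image

# Label what's in a photo with a pretrained classifier.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # the resize/crop/normalize pipeline

img = Image.open("street_scene.jpg").convert("RGB")  # hypothetical photo
with torch.no_grad():
    probs = model(preprocess(img).unsqueeze(0)).softmax(dim=1)[0]

top5 = probs.topk(5)
for p, i in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][i]}: {p:.1%}")
```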

But deception attacks, although rare so far, can manipulate machine learning algorithms: subtle changes to real-world objects can cause a model to misidentify them, which in the case of a self-driving vehicle can have disastrous consequences.
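The article doesn’t spell out the attack mechanics, but the classic example of this kind of deception is the fast gradient sign method (FGSM): perturb each input pixel slightly in the direction that most increases the model’s loss. A minimal PyTorch sketch, assuming a classifier that takes images with pixel values in [0, 1]:

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """Build an adversarial example from image batch x with true labels y.

    The perturbation is bounded by epsilon per pixel: small enough to be
    nearly invisible to humans, yet often enough to flip the prediction.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()                          # gradient of loss w.r.t. pixels
    x_adv = x + epsilon * x.grad.sign()      # step uphill on the loss
    return x_adv.clamp(0.0, 1.0).detach()    # stay a valid image
```

Defenses of the kind such a program targets aim to make models robust to exactly these bounded perturbations, for example by training on adversarially perturbed inputs.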

AI has shown that mice have a range of facial expressions that reveal what they are feeling—offering fresh clues about how emotional responses arise in human brains.

Scientists at the Max Planck Institute of Neurobiology in Germany made the discovery by recording the faces of lab mice when they were exposed to different stimuli, such as sweet flavors and electric shocks. The researchers then used machine learning algorithms to analyze how the rodents’ faces changed when they experienced different feelings.
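The blurb doesn’t name the institute’s exact pipeline, but a common recipe for this kind of analysis is to turn each video frame into texture features and train a classifier to predict the stimulus. The sketch below is a hypothetical illustration along those lines, with random arrays standing in for real mouse-face recordings.

```python
import numpy as np
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Placeholders for real data: grayscale face-video frames and the stimulus
# presented at each frame (e.g., a sweet flavor vs. a mild shock).
frames = np.random.rand(300, 64, 64)
labels = np.random.choice(["sweet", "shock", "neutral"], size=300)

# Describe each frame by its local edge/texture pattern (HOG features),
# then learn which patterns co-occur with which stimulus.
features = np.array([hog(f, pixels_per_cell=(8, 8)) for f in frames])
X_tr, X_te, y_tr, y_te = train_test_split(features, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on random data
```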

Social roboticist Heather Knight sees robots and entertainment as a research-rich coupling. So she programmed a charming humanoid robot named DATA with jokes and equipped it with sensors and algorithmic capabilities to help with timing and gauging a crowd. Then Knight and DATA hit the road on an international robot stand-up comedy tour. Their act landed stage time at a TED conference, and Knight was profiled in Forbes’ 30 Under 30. Watching DATA perform is much like watching an amateur stand-up comedian cutting their chops at an open-mic night: light comedy with a sweet but wooden delivery.
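The piece doesn’t describe DATA’s algorithms in detail. One simple, hypothetical way a robot comedian could gauge a crowd is an epsilon-greedy bandit that favors jokes earning the most laughter; in the sketch below, measured_laughter is a made-up stand-in for the robot’s audio sensing.

```python
import random

JOKES = ["joke_a", "joke_b", "joke_c"]  # placeholder set list
scores = {j: [] for j in JOKES}

def measured_laughter(joke):
    """Made-up stand-in for microphone-based crowd sensing: a score in [0, 1]."""
    base = {"joke_a": 0.7, "joke_b": 0.4, "joke_c": 0.2}[joke]
    return min(1.0, max(0.0, random.gauss(base, 0.15)))

def pick_joke(epsilon=0.2):
    # Mostly exploit the joke with the best average laughter so far,
    # but occasionally explore so the set adapts to each new crowd.
    untried = [j for j in JOKES if not scores[j]]
    if untried or random.random() < epsilon:
        return random.choice(untried or JOKES)
    return max(JOKES, key=lambda j: sum(scores[j]) / len(scores[j]))

for _ in range(20):  # a 20-joke set
    joke = pick_joke()
    scores[joke].append(measured_laughter(joke))

print({j: round(sum(s) / len(s), 2) for j, s in scores.items() if s})
```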

Knight’s goal is specific: