
A new type of supply chain attack unveiled last month is targeting more and more companies, with new rounds this week taking aim at Microsoft, Amazon, Slack, Lyft, Zillow, and an unknown number of others. In weeks past, Apple, Microsoft, Tesla, and 32 other companies were targeted by a similar attack that allowed a security researcher to execute unauthorized code inside their networks.

The latest attack against Microsoft was also carried out as a proof of concept by a researcher. Attacks targeting Amazon, Slack, Lyft, and Zillow, by contrast, were malicious, but it’s not clear whether they succeeded in executing the malware inside those companies’ networks. The npm and PyPI open source code repositories, meanwhile, have been flooded with more than 5,000 proof-of-concept packages, according to Sonatype, a firm that helps customers secure the applications they develop.

“Given the daily volume of suspicious npm packages being picked up by Sonatype’s automated malware detection systems, we only expect this trend to increase, with adversaries abusing dependency confusion to conduct even more sinister activities,” Sonatype researcher Ax Sharma wrote earlier this week.
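Dependency confusion abuses the way many build tools resolve package names: if an attacker publishes a package on a public registry under the same name as a company’s internal package, often with a higher version number, a misconfigured build can silently pull the attacker’s copy instead. As a minimal defensive sketch (the internal package names below are hypothetical placeholders, not names from the attacks described above), a script can check whether any internal names are already claimed on the public PyPI index:

```python
# Minimal sketch: check whether internal package names are already claimed on the
# public PyPI index, one precondition for a dependency-confusion attack.
# The package names below are hypothetical placeholders for private packages.
import requests

INTERNAL_PACKAGES = ["acme-internal-auth", "acme-billing-client"]  # hypothetical names

for name in INTERNAL_PACKAGES:
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    if resp.status_code == 200:
        version = resp.json()["info"]["version"]
        print(f"WARNING: '{name}' exists on public PyPI (version {version}) -- "
              "a build that falls back to the public index could pull it instead.")
    elif resp.status_code == 404:
        print(f"OK: '{name}' is not published on public PyPI.")
    else:
        print(f"Could not check '{name}': HTTP {resp.status_code}")
```

The same kind of check can be run against npm’s public registry; the longer-term fix is to pin builds to a private registry or to reserved scopes so that public lookups for internal names never happen.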

Materials capable of performing complex functions in response to changes in the environment could form the basis for exciting new technologies. Think of a capsule implanted in your body that automatically releases antibodies in response to a virus, a surface that releases an antibacterial agent when exposed to dangerous bacteria, a material that adapts its shape when it needs to sustain a particular weight, or clothing that senses and captures toxic contaminants from the air.

Scientists and engineers have already taken the first step toward these types of autonomous materials by developing “active” materials that have the ability to move on their own. Now, researchers at the University of Chicago have taken the next step by showing that the movement in one such active material—liquid crystals—can be harnessed and directed.

This proof-of-concept research, published on February 18, 2021, in the journal Nature Materials, is the result of three years of collaborative work by the groups of Juan de Pablo, Liew Family Professor of Molecular Engineering, and Margaret Gardel, Horace B. Horton Professor of Physics and Molecular Engineering, along with Vincenzo Vitelli, professor of physics, and Aaron Dinner, professor of chemistry.

Most of this automation is being done by companies you’ve probably never heard of. UiPath, the largest stand-alone automation firm, is valued at $35 billion — roughly the size of eBay — and is slated to go public later this year. Other companies like Automation Anywhere and Blue Prism, which have Fortune 500 companies like Coca-Cola and Walgreens Boots Alliance as clients, are also enjoying breakneck growth, and tech giants like Microsoft have recently introduced their own automation products to get in on the action.


Workers with college degrees and specialized training once felt relatively safe from automation. They aren’t.

The legal rights of robots have expanded, at least in Pennsylvania. There, autonomous delivery drones will be allowed to maneuver on sidewalks and paths as well as roadways and will now technically be considered “pedestrians.” It’s the latest change in the evolving relationship between autonomous vehicles and humans.

Researchers affiliated with Nvidia and Harvard today detailed AtacWorks, a machine learning toolkit designed to bring down the cost and time needed for rare and single-cell experiments. In a study published in the journal Nature Communications, the coauthors showed that AtacWorks can run analyses on a whole genome in just half an hour compared with the multiple hours traditional methods take.

Most cells in the body carry around a complete copy of a person’s DNA, with billions of base pairs crammed into the nucleus. But an individual cell pulls out only the subsection of genetic components that it needs to function, with cell types like liver, blood, or skin cells using different genes. The regions of DNA that determine a cell’s function are more or less easily accessible, while the rest are tightly wound around proteins.

AtacWorks, which is available from Nvidia’s NGC hub of GPU-optimized software, works with ATAC-seq, a method for finding open areas in the genome of cells pioneered by Harvard professor Jason Buenrostro, one of the paper’s coauthors. ATAC-seq measures the intensity of a signal at every spot on the genome. Peaks in the signal correspond to regions of open DNA, but the fewer cells available, the noisier the data appears, making it difficult to identify which areas of the DNA are accessible.
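AtacWorks itself is a deep-learning denoiser for these coverage tracks; the sketch below is only a rough, hypothetical illustration of that general idea on synthetic data, not the AtacWorks architecture or API. It trains a small 1D convolutional network to map a noisy, low-cell-count signal back toward a clean, peak-shaped track:

```python
# Illustrative sketch only: a tiny 1D convolutional denoiser for a synthetic
# ATAC-seq-like coverage track. This is NOT the AtacWorks architecture or API;
# it only shows the general idea of learning a mapping from noisy signal
# (as from few cells) to clean, high-coverage signal.
import torch
import torch.nn as nn

def synthetic_tracks(n=256, length=1000):
    """Generate clean coverage tracks with a few Gaussian peaks plus a noisy copy."""
    clean = torch.zeros(n, 1, length)
    x = torch.arange(length, dtype=torch.float32)
    for i in range(n):
        for _ in range(torch.randint(3, 8, (1,)).item()):
            center = torch.randint(50, length - 50, (1,)).item()
            width = torch.randint(10, 40, (1,)).item()
            clean[i, 0] += torch.exp(-((x - center) ** 2) / (2 * width ** 2))
    noisy = clean + 0.3 * torch.randn_like(clean)   # stand-in for low-cell noise
    return noisy, clean

model = nn.Sequential(                              # small 1D CNN denoiser
    nn.Conv1d(1, 16, kernel_size=25, padding=12), nn.ReLU(),
    nn.Conv1d(16, 16, kernel_size=25, padding=12), nn.ReLU(),
    nn.Conv1d(16, 1, kernel_size=25, padding=12),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

noisy, clean = synthetic_tracks()
for step in range(200):                             # brief training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    opt.step()
print("final reconstruction MSE:", loss.item())
```

In this toy setup the “noise” is simple Gaussian jitter; in real low-cell ATAC-seq data the noise comes from sparse read counts, which is the kind of degradation a tool like AtacWorks is trained to correct.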

Summary: Combining brain activity data with artificial intelligence, researchers generated faces based upon what individuals considered to be attractive features.

Source: University of Helsinki.

Researchers at the University of Helsinki and University of Copenhagen investigated whether a computer would be able to identify the facial features we consider attractive and, based on this, create new images matching our criteria. The researchers used artificial intelligence to interpret brain signals and combined the resulting brain-computer interface with a generative model of artificial faces. This enabled the computer to create facial images that appealed to individual preferences.

Researchers at the University of California San Diego School of Medicine have shown that they can block inflammation in mice, thereby protecting them from liver disease and hardening of the arteries while increasing their healthy lifespan.


Researchers have succeeded in making an AI understand our subjective notions of what makes faces attractive. The device demonstrated this knowledge by creating new portraits that individuals found personally attractive. The results can be used, for example, in modeling preferences and decision-making as well as potentially identifying unconscious attitudes.

Researchers at the University of Helsinki and University of Copenhagen investigated whether a computer would be able to identify the facial features we consider attractive and, based on this, create new images matching our criteria. The researchers used artificial intelligence to interpret brain signals and combined the resulting brain-computer interface with a generative model of artificial faces. This enabled the computer to create facial images that appealed to individual preferences.
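As a rough sketch of the general idea (the dimensions and the simulated “brain scores” below are assumptions for illustration, not the study’s actual pipeline), one can imagine sampling faces from a generative model’s latent space, assigning each face a preference score decoded from the viewer’s brain response, and averaging the latents of the most strongly preferred faces to obtain a personalized latent vector that the generator can turn into a new, individually appealing face:

```python
# Rough sketch of the general idea, not the study's actual pipeline: faces are
# sampled from a generative model's latent space, each shown face gets a
# brain-derived "preference" score (simulated here), and a personalized latent
# vector is formed as the score-weighted average of the best-scoring latents.
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512                 # typical GAN latent size; an assumption here
n_faces = 200

latents = rng.standard_normal((n_faces, latent_dim))   # latents of shown faces

# Stand-in for EEG decoding: a hidden "true preference" direction plus noise.
true_pref = rng.standard_normal(latent_dim)
scores = latents @ true_pref + rng.normal(scale=5.0, size=n_faces)

# Keep the top-scoring faces and average their latents, weighted by score.
top = np.argsort(scores)[-20:]
weights = scores[top] - scores[top].min() + 1e-6
personalized_latent = (latents[top] * weights[:, None]).sum(0) / weights.sum()

# The personalized latent should align with the hidden preference direction.
cos = personalized_latent @ true_pref / (
    np.linalg.norm(personalized_latent) * np.linalg.norm(true_pref))
print(f"cosine similarity with hidden preference direction: {cos:.2f}")
```

Feeding the resulting personalized latent back through the face generator would then yield the kind of individually tailored portrait described above.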

“In our previous studies, we designed models that could identify and control simple portrait features, such as hair color and emotion. However, people largely agree on who is blond and who smiles. Attractiveness is a more challenging subject of study, as it is associated with cultural and psychological factors that likely play unconscious roles in our individual preferences. Indeed, we often find it very hard to explain what it is exactly that makes something, or someone, beautiful: Beauty is in the eye of the beholder,” says Senior Researcher and Docent Michiel Spapé from the Department of Psychology and Logopedics, University of Helsinki.