
Summary: A new AI algorithm can predict the onset of Alzheimer’s disease with an accuracy of over 99% by analyzing fMRI brain scans.

Source: Kaunas University of Technology.

Researchers from Kaunas University of Technology, Lithuania, developed a deep learning-based method that can predict the possible onset of Alzheimer’s disease from brain images with an accuracy of over 99 percent. The method was developed by analyzing functional MRI images obtained from 138 subjects and performed better in terms of accuracy, sensitivity, and specificity than previously developed methods.
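The article reports only the headline metrics, but the pipeline it describes amounts to a convolutional classifier over fMRI-derived images evaluated for accuracy, sensitivity, and specificity. The following is a minimal, hypothetical PyTorch sketch of that kind of pipeline; the architecture, input shapes, and labels are assumptions, not the Kaunas team's actual model.

```python
# Minimal sketch (not the Kaunas team's model): a small CNN that classifies
# 2-D images derived from fMRI data into two classes and is evaluated for
# accuracy, sensitivity, and specificity. Shapes and layers are illustrative.
import torch
import torch.nn as nn

class FMRIClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, x):
        return self.classifier(self.features(x))

def evaluate(model, images, labels):
    """Compute accuracy, sensitivity (recall on positives), and specificity."""
    with torch.no_grad():
        preds = model(images).argmax(dim=1)
    tp = ((preds == 1) & (labels == 1)).sum().item()
    tn = ((preds == 0) & (labels == 0)).sum().item()
    fp = ((preds == 1) & (labels == 0)).sum().item()
    fn = ((preds == 0) & (labels == 1)).sum().item()
    accuracy = (tp + tn) / len(labels)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    return accuracy, sensitivity, specificity
```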

The cashierless technology shift continues apace with today’s news that Zippin has raised $30 million in a series B round of funding. The San Francisco-based company is one of several players in the space to gain traction for a technology that seeks to not only make supermarket queues obsolete, but also generate big data insights for retailers.

Founded in 2018, Zippin leverages AI, cameras, and smart shelf sensors to enable shoppers to place items in their cart and walk out without waiting. The company opened its first checkout-free store in San Francisco back in 2018, and it has since entered into partnerships with the likes of Aramark, Sberbank, and the Sacramento Kings’ Golden 1 Center to power cashierless stores globally.
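As a rough illustration of how such a system could combine shelf-sensor events with camera-based shopper identification into a virtual cart, here is a toy sketch; the class, event format, and charging logic are invented for illustration and are not Zippin's implementation.

```python
# Toy illustration (not Zippin's system): fuse smart-shelf "pick" events with
# a camera-derived shopper ID to maintain a virtual cart that is charged
# automatically when the shopper walks out.
from collections import defaultdict

class VirtualCartStore:
    def __init__(self):
        self.carts = defaultdict(list)          # shopper_id -> list of SKUs

    def on_shelf_event(self, shopper_id, sku, delta):
        """delta = -1 for an item picked up, +1 for an item put back."""
        if delta < 0:
            self.carts[shopper_id].append(sku)
        elif sku in self.carts[shopper_id]:
            self.carts[shopper_id].remove(sku)

    def on_exit(self, shopper_id, prices):
        items = self.carts.pop(shopper_id, [])
        return sum(prices[sku] for sku in items)   # amount to charge

store = VirtualCartStore()
store.on_shelf_event("shopper-42", "oat-milk", delta=-1)
print(store.on_exit("shopper-42", {"oat-milk": 3.49}))   # 3.49
```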

Zippin had previously raised around $15 million, and with another $30 million from SAP, Maven Ventures, Evolv Ventures, and OurCrowd, the company is well-financed to capitalize on the retail industry’s continued push toward automation-powered efficiency. The company said its ultimate goal is to retrofit stores with the required technology inside a day, with minimal downtime for retailers.

Integrated Information Theory is one of the leading models of consciousness. It aims to describe both the quality and quantity of the conscious experience of a physical system, such as the brain, in a particular state. In this contribution, we propound the mathematical structure of the theory, separating the essentials from auxiliary formal tools. We provide a definition of a generalized IIT which has IIT 3.0 of Tononi et al., as well as the Quantum IIT introduced by Zanardi et al. as special cases. This provides an axiomatic definition of the theory which may serve as the starting point for future formal investigations and as an introduction suitable for researchers with a formal background.

Integrated Information Theory (IIT), developed by Giulio Tononi and collaborators [5, 45–47], has emerged as one of the leading scientific theories of consciousness. At the heart of the latest version of the theory [19, 25, 26, 31, 40] is an algorithm which, based on the level of integration of the internal functional relationships of a physical system in a given state, aims to determine both the quality and quantity (‘Φ value’) of its conscious experience.
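Schematically, and glossing over the details of IIT 3.0, the quantity of experience is obtained by comparing the system's cause-effect structure with that of its least disruptive partition. The LaTeX sketch below states this generic form; it is a simplification for orientation, not the full algorithm.

```latex
% Schematic (simplified) form of the quantity of experience in IIT:
% Phi is the distance between the cause-effect structure of the whole
% system in state s and that of its minimum information partition (MIP).
\[
  \Phi(s) \;=\; D\bigl(\mathcal{C}(s),\, \mathcal{C}^{P^{\ast}}(s)\bigr),
  \qquad
  P^{\ast} \;=\; \operatorname*{arg\,min}_{P \in \mathcal{P}}
  D\bigl(\mathcal{C}(s),\, \mathcal{C}^{P}(s)\bigr),
\]
% where C(s) is the cause-effect structure of the system in state s,
% C^P(s) is the structure obtained after cutting the system along
% partition P, and D is a suitable distance (e.g. an earth mover's distance).
```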

The Human Cell Atlas is the world’s largest, growing single-cell reference atlas. It contains references of millions of cells across tissues, organs and developmental stages. These references help physicians to understand the influences of aging, environment and disease on a cell—and ultimately diagnose and treat patients better. Yet, reference atlases do not come without challenges. Single-cell datasets may contain measurement errors (batch effect), the global availability of computational resources is limited and the sharing of raw data is often legally restricted.

Researchers from Helmholtz Zentrum München and the Technical University of Munich (TUM) developed a novel algorithm called “scArches,” short for single-cell architecture surgery. The biggest advantage: “Instead of sharing raw data between clinics or research centers, the algorithm uses transfer learning to compare new datasets from single-cell genomics with existing references and thus preserves privacy and anonymity. This also makes annotating and interpreting new data sets very easy and democratizes the usage of single-cell reference atlases dramatically,” says Mohammad Lotfollahi, the lead scientist behind the algorithm.
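The general idea behind this kind of “architecture surgery” is to share a pretrained reference model rather than raw data: the shared weights are frozen and only a small set of new, query-specific parameters is fine-tuned on local data. The PyTorch sketch below illustrates that idea conceptually; it is not the scArches code, and the layer sizes and condition-embedding scheme are assumptions.

```python
# Conceptual sketch of "architecture surgery"-style transfer learning (not the
# actual scArches code): a reference autoencoder is trained centrally and shared
# as weights only; a clinic freezes those weights and fine-tunes new
# batch/condition parameters on its own query data, so raw data never moves.
import torch
import torch.nn as nn

class ReferenceEncoder(nn.Module):
    def __init__(self, n_genes=2000, n_latent=32, n_conditions=10):
        super().__init__()
        self.condition_embedding = nn.Embedding(n_conditions, 16)
        self.net = nn.Sequential(nn.Linear(n_genes + 16, 256), nn.ReLU(),
                                 nn.Linear(256, n_latent))

    def forward(self, x, condition):
        cond = self.condition_embedding(condition)
        return self.net(torch.cat([x, cond], dim=1))

def surgery(pretrained: ReferenceEncoder, n_new_conditions: int):
    """Freeze shared weights; add embedding rows for the new query batches."""
    for p in pretrained.parameters():
        p.requires_grad = False                 # keep reference knowledge fixed
    old = pretrained.condition_embedding
    new = nn.Embedding(old.num_embeddings + n_new_conditions, old.embedding_dim)
    new.weight.data[: old.num_embeddings] = old.weight.data
    # The whole new table is trainable here; a stricter version would mask
    # gradients so that only the newly added rows are updated.
    pretrained.condition_embedding = new
    return pretrained
```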

Russian start-up NTechLab has released FindFace Multi, a detection technology that uses an advanced algorithm to recognize not only faces, but also bodies of people and cars. This is an update to the company’s flagship product and is able to support numerous video streams and facial database entries.

Body recognition allows FindFace Multi users to count and search for people moving through an environment, as well as to identify individuals and track their movements. The algorithm also takes into account markers such as height and the color of clothes and accessories.

The vehicle recognition function determines the body type, color, manufacturer, and model of a car, and also supports searching by license plate. Even if license plates or parts of the vehicle are obscured or not visible, the system can still identify a car.
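Once a detector has tagged each detection with such attributes, searching “by markers” reduces to filtering structured records. The snippet below is purely illustrative of that search step; the data layout and attribute names are invented and do not reflect NTechLab's actual algorithm.

```python
# Illustrative only (not NTechLab's algorithm): after detection, each person or
# vehicle is a record with attributes, so "search by markers" is a filter over
# those records, e.g. vehicles by make and color even when the plate was unread.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    kind: str                      # "person" or "vehicle"
    attributes: dict               # e.g. {"height_cm": 182, "top_color": "red"}
    plate: Optional[str] = None    # None if the plate was obscured

def search(detections, kind, **required):
    return [d for d in detections
            if d.kind == kind
            and all(d.attributes.get(k) == v for k, v in required.items())]

detections = [
    Detection("person", {"height_cm": 182, "top_color": "red"}),
    Detection("vehicle", {"make": "Lada", "model": "Vesta", "color": "white"}),
]
print(search(detections, "vehicle", make="Lada", color="white"))
```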

Wide Area Networks (WANs), the global backbones and workhorses of today’s internet that connect billions of computers over continents and oceans, are the foundation of modern online services. As COVID-19 has placed a vital reliance on online services, today’s networks are struggling to deliver the high bandwidth and availability demanded by emerging workloads related to machine learning, video calls, and health care.

To connect WANs over hundreds of miles, fiber optic cables, made of incredibly thin strands of glass or plastic known as optical fibers, are threaded throughout our neighborhoods and transmit data using light. While they’re extremely fast, they’re not always reliable: They can easily break from weather, thunderstorms, accidents, and even animals. These tears can cause severe and expensive damage, resulting in 911 service outages, lost connectivity to the internet, and inability to use smartphone apps.

Scientists from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and from Facebook recently came up with a way to preserve the network when the fiber is down, and to reduce cost. Their system, called ARROW, reconfigures the optical light from a damaged fiber to healthy ones, while using an online algorithm to proactively plan for potential fiber cuts ahead of time, based on real-time internet traffic demands.
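ARROW's optimization is far more sophisticated than this, but the underlying idea can be illustrated with a toy greedy planner: when a fiber is cut, reassign its wavelengths to surviving fibers with spare capacity, guided by current load. The data layout and greedy policy below are assumptions for illustration only, not the ARROW algorithm.

```python
# Toy restoration planner in the spirit of ARROW (not the actual system): when
# a fiber is cut, greedily reassign its wavelengths to surviving fibers that
# still have spare capacity, preferring the least-loaded ones.
def plan_restoration(fibers, cut_fiber):
    """fibers: {name: {"capacity": int, "used": int}}; returns (plan, unrestored)."""
    to_move = fibers[cut_fiber]["used"]
    survivors = {f: v for f, v in fibers.items() if f != cut_fiber}
    plan = {}
    # Least-loaded first, so the restored load is spread evenly.
    for name in sorted(survivors, key=lambda f: survivors[f]["used"]):
        spare = survivors[name]["capacity"] - survivors[name]["used"]
        take = min(spare, to_move)
        if take > 0:
            plan[name] = take
            to_move -= take
        if to_move == 0:
            break
    return plan, to_move          # to_move > 0 means some traffic is lost

fibers = {"A": {"capacity": 100, "used": 80},
          "B": {"capacity": 100, "used": 40},
          "C": {"capacity": 100, "used": 90}}
print(plan_restoration(fibers, cut_fiber="C"))   # ({'B': 60, 'A': 20}, 10)
```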

“The network learned to find fundamental concepts that are key to molecular structure formation, but without explicitly being told to,” Townshend added. “The exciting aspect is that the algorithm has clearly recovered things that we knew were important, but it has also recovered characteristics that we didn’t know about before.”

Having shown success with proteins, the researchers turned their attention to RNA molecules. The researchers tested their algorithm in a series of “RNA Puzzles” from a longstanding competition in their field, and in every case, the tool outperformed all the other puzzle participants and did so without being designed specifically for RNA structures.

“We introduce a machine learning approach that enables identification of accurate structural models without assumptions about their defining characteristics, despite being trained with only 18 known RNA structures,” the authors of the Science article wrote. “The resulting scoring function, the Atomic Rotationally Equivariant Scorer (ARES), substantially outperforms previous methods and consistently produces the best results in community-wide blind RNA structure prediction challenges.”
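ARES itself is built from rotationally equivariant atomic layers; as a much simpler illustration of what a learned structure scorer does, the sketch below maps atom coordinates and element types to a single predicted quality score and regresses it toward a known deviation such as RMSD. The architecture and training setup are assumptions, not the published model.

```python
# Conceptual sketch of a learned structure scorer (not ARES, which uses
# rotationally equivariant atomic layers): embed each atom's element type and
# coordinates, pool over atoms, and predict one quality score per candidate
# model, trained to match its known deviation (e.g. RMSD) from the true structure.
import torch
import torch.nn as nn

class SimpleAtomScorer(nn.Module):
    def __init__(self, n_elements=10, d=64):
        super().__init__()
        self.embed = nn.Embedding(n_elements, d)
        self.coord_proj = nn.Linear(3, d)
        self.readout = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, coords, elements):
        # coords: (n_atoms, 3); elements: (n_atoms,) integer element ids
        h = self.embed(elements) + self.coord_proj(coords)
        return self.readout(h.mean(dim=0))       # one score per candidate model

scorer = SimpleAtomScorer()
coords = torch.randn(500, 3)                     # a hypothetical candidate model
elements = torch.randint(0, 10, (500,))
true_rmsd = torch.tensor([4.2])
loss = nn.MSELoss()(scorer(coords, elements), true_rmsd)  # regress toward RMSD
```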

Analog photonic solutions offer unique opportunities to address complex computational tasks with unprecedented performance in terms of energy dissipation and speeds, overcoming current limitations of modern computing architectures based on electron flows and digital approaches.

In a new study published on August 26, 2021, in the journal Nature Communications Physics, researchers led by Volker Sorger, an associate professor of electrical and computer engineering at the George Washington University, reveal a new nanophotonic analog processor capable of solving partial differential equations. This nanophotonic processor can be integrated at chip-scale, processing arbitrary inputs at the speed of light.

The research team also included researchers at the University of California, Los Angeles, and City College of New York.
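Analog processors of this kind typically accelerate the linear-algebra core of a discretized PDE. As a purely digital stand-in, the NumPy sketch below discretizes a 1-D Poisson equation into a linear system; the photonic hardware would perform the equivalent matrix-vector arithmetic optically. The grid size and right-hand side are arbitrary choices for illustration.

```python
# Illustration of the computation such a processor targets (NumPy here, not
# photonics): discretize the 1-D Poisson equation u''(x) = f(x) on a grid,
# turning the PDE into a linear system A u = f, i.e. the matrix-vector
# arithmetic an analog photonic core can perform in hardware.
import numpy as np

n, h = 50, 1.0 / 51                     # interior grid points and spacing on (0, 1)
x = np.linspace(h, 1 - h, n)
f = np.sin(np.pi * x)                   # example right-hand side

# Second-difference matrix: (u[i-1] - 2 u[i] + u[i+1]) / h^2 approximates u''.
A = (np.diag(np.full(n, -2.0)) +
     np.diag(np.ones(n - 1), 1) +
     np.diag(np.ones(n - 1), -1)) / h**2

u = np.linalg.solve(A, f)               # digital stand-in for the analog solve
# Exact solution is -sin(pi x) / pi^2, so this prints a small discretization error.
print(np.max(np.abs(u + np.sin(np.pi * x) / np.pi**2)))
```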

An international research team with participants from several universities, including FAU, has proposed a standardized registry for artificial intelligence (AI) work in biomedicine to improve the reproducibility of results and create trust in the use of AI algorithms in biomedical research and, in the future, in everyday clinical practice. The scientists presented their proposal in the journal Nature Methods.

In the last decades, new technologies have made it possible to develop a wide variety of systems that can generate huge amounts of biomedical data, for example in cancer research. At the same time, completely new possibilities have developed for examining and evaluating this data using AI methods. AI algorithms in intensive care units, for example, can predict circulatory failure at an early stage based on large amounts of data from several monitoring systems by processing a lot of complex information from different sources at the same time, which is far beyond human capabilities.
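As a toy illustration of that kind of early-warning setup, the sketch below combines features from several hypothetical monitoring streams into one vector and fits a logistic-regression classifier. The features, labels, and model choice are invented for illustration and do not describe any specific clinical system.

```python
# Toy early-warning sketch (not a clinical system): combine features from
# several monitoring streams into one vector per patient-hour and fit a
# logistic-regression classifier that flags elevated risk of circulatory failure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical features per patient-hour, e.g. mean heart rate, mean arterial
# pressure, lactate, and SpO2, drawn from different monitoring systems.
X = rng.normal(size=(1000, 4))
y = (X @ np.array([0.8, -1.2, 1.5, -0.6]) + rng.normal(size=1000) > 1.0).astype(int)

model = LogisticRegression().fit(X[:800], y[:800])
print("held-out accuracy:", model.score(X[800:], y[800:]))
```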

This great potential of AI systems leads to an unmanageable number of biomedical AI applications. Unfortunately, the corresponding reports and publications do not always adhere to best practices or provide only incomplete information about the algorithms used or the origin of the data. This makes assessment and comprehensive comparisons of AI models difficult. The decisions of AIs are not always comprehensible to humans and results are seldom fully reproducible. This situation is untenable, especially in clinical research, where trust in AI models and transparent research reports are crucial to increase the acceptance of AI algorithms and to develop improved AI methods for basic biomedical research.
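The article does not spell out the registry's schema; purely as an illustration of the kind of metadata such an entry might capture, here is a hypothetical Python sketch. All field names and values are assumptions, not the authors' proposal.

```python
# Illustrative only: the kind of metadata a standardized AI registry entry
# might record. Field names are assumptions, not the schema proposed in the
# Nature Methods article.
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    model_name: str
    task: str                          # e.g. "circulatory-failure early warning"
    training_data_source: str          # provenance of the training data
    data_access_policy: str            # e.g. "restricted, request via committee"
    code_repository: str               # where the exact training code lives
    evaluation_metrics: dict = field(default_factory=dict)
    random_seeds: list = field(default_factory=list)   # for reproducibility

entry = RegistryEntry(
    model_name="icu-risk-v1",
    task="circulatory-failure early warning",
    training_data_source="multi-center ICU monitoring (hypothetical)",
    data_access_policy="restricted",
    code_repository="https://example.org/icu-risk-v1",   # placeholder URL
    evaluation_metrics={"AUROC": 0.91},
    random_seeds=[0, 1, 2],
)
```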

Determining the 3D shapes of biological molecules is one of the hardest problems in modern biology and medical discovery. Companies and research institutions often spend millions of dollars to determine a molecular structure—and even such massive efforts are frequently unsuccessful.

Using clever, new machine learning techniques, Stanford University Ph.D. students Stephan Eismann and Raphael Townshend, under the guidance of Ron Dror, associate professor of computer science, have developed an approach that overcomes this problem by predicting accurate structures computationally.

Most notably, their approach succeeds even when learning from only a few known structures, making it applicable to the types of molecules whose structures are most difficult to determine experimentally.