
DNA contains the genetic information that influences everything from eye color to susceptibility to illness and disorders. The roughly 20,000 sections of DNA in the human body known as genes carry instructions for proteins that perform vital tasks in our cells. Even so, these genes make up less than 2% of the genome. The remaining base pairs are referred to as “non-coding”: they contain less well-understood instructions about when and where genes should be produced, or expressed, in the human body.

DeepMind, in collaboration with its Alphabet colleagues at Calico, has introduced Enformer, a neural network architecture that accurately predicts gene expression from DNA sequences.

Earlier models of gene expression used convolutional neural networks as key building blocks, but their accuracy and usefulness were limited by difficulty in modeling the influence of distal enhancers (regulatory DNA elements that can lie far from the genes they control) on gene expression. The new method builds on Basenji2, a model that can predict regulatory activity from DNA sequences of up to roughly 40,000 base pairs.
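To make the architectural shift concrete, here is a minimal sketch, written in PyTorch rather than taken from DeepMind's published code, of how a convolutional stem can be paired with self-attention so that distant parts of a DNA sequence inform each position's prediction. The layer sizes, sequence length, and number of output tracks are illustrative assumptions, not Enformer's actual configuration.

# Minimal sketch of an Enformer-style model: convolutions pool the raw
# one-hot DNA sequence into coarser bins, then self-attention lets every
# bin attend to every other bin, capturing distal regulatory interactions
# that a purely convolutional receptive field would miss.
import torch
import torch.nn as nn

class TinyEnformerSketch(nn.Module):
    def __init__(self, channels=128, n_heads=4, n_layers=2, n_tracks=10):
        super().__init__()
        # Convolutional "stem": 4 input channels (A, C, G, T), pooled twice
        # so each output position summarizes a short local window of bases.
        self.stem = nn.Sequential(
            nn.Conv1d(4, channels, kernel_size=15, padding=7),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        # Transformer encoder: attention spans the whole pooled sequence,
        # so distant elements can influence each position's prediction.
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=n_heads, dim_feedforward=2 * channels,
            batch_first=True,
        )
        self.attention = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Head: predict several regulatory-activity tracks per bin.
        self.head = nn.Linear(channels, n_tracks)

    def forward(self, dna_one_hot):
        # dna_one_hot: (batch, 4, sequence_length)
        x = self.stem(dna_one_hot)          # (batch, channels, bins)
        x = x.transpose(1, 2)               # (batch, bins, channels)
        x = self.attention(x)               # global context via self-attention
        return self.head(x)                 # (batch, bins, n_tracks)

# Toy usage: a random 8,192-base sequence in one-hot form.
if __name__ == "__main__":
    seq = torch.zeros(1, 4, 8192)
    seq[0, torch.randint(0, 4, (8192,)), torch.arange(8192)] = 1.0
    preds = TinyEnformerSketch()(seq)
    print(preds.shape)  # torch.Size([1, 2048, 10])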

Early in 2021, the Stanford Virtual Human Interaction Lab looked at the psychological consequences of spending long days videoconferencing and in virtual meetings. The phenomenon, popularized as “Zoom fatigue,” is the result of maxed-out cognitive load and can even reduce effectiveness. For all of that investment in remote work technology, senior managers feel there is very little payoff.

In a University of North Carolina survey of 182 senior managers, 65% said meetings kept them from completing their own work, 71% felt meetings were inefficient and unproductive, and 64% felt meetings undercut deep thinking.

As technology-dependent remote workers proliferate, new solutions are coming to the fore that may make both in-person and virtual meetings more productive.

There’s a lot of excitement at the intersection of artificial intelligence and health care. AI has already been used to improve disease treatment and detection, discover promising new drugs, identify links between genes and diseases, and more.

By analyzing large datasets and finding patterns, virtually any new algorithm has the potential to help patients — AI researchers just need access to the right data to train and test those algorithms. Hospitals, understandably, are hesitant to share sensitive patient information with research teams. When they do share data, it’s difficult to verify that researchers are only using the data they need and deleting it after they’re done.

Secure AI Labs (SAIL) is addressing those problems with a technology that lets AI algorithms run on encrypted datasets that never leave the data owner’s system. Health care organizations can control how their datasets are used, while researchers can protect the confidentiality of their models and search queries. Neither party needs to see the data or the model to collaborate.
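As a rough illustration of that access pattern, and not SAIL's actual API, the sketch below keeps records inside a data owner's environment, runs a researcher-supplied computation next to the data, and returns only an aggregate result. All class names, fields, and the policy filter are invented for illustration.

# Illustrative sketch of "the data never leaves the owner's system":
# the researcher submits a computation, it runs beside the records, and
# only a policy-approved aggregate crosses the boundary.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DataOwnerEnclave:
    """Holds raw records; exposes only policy-approved aggregate queries."""
    _records: List[dict] = field(default_factory=list)
    allowed_fields: tuple = ("age", "biomarker")

    def run(self, computation: Callable[[List[dict]], float]) -> float:
        # The researcher's code runs *here*, next to the data.
        # Only the scalar result is returned; records never leave.
        visible = [
            {k: r[k] for k in self.allowed_fields if k in r}
            for r in self._records
        ]
        return computation(visible)

# Researcher side: define the computation without ever seeing the rows.
def mean_biomarker(rows: List[dict]) -> float:
    values = [r["biomarker"] for r in rows if "biomarker" in r]
    return sum(values) / len(values) if values else 0.0

if __name__ == "__main__":
    enclave = DataOwnerEnclave(_records=[
        {"age": 54, "biomarker": 1.2, "name": "redacted"},
        {"age": 61, "biomarker": 0.9, "name": "redacted"},
    ])
    print(enclave.run(mean_biomarker))  # only the aggregate leaves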

Researchers from Georgia Tech’s Center for Human-Centric Interfaces and Engineering have created soft scalp electronics (SSE), a wearable wireless electroencephalography (EEG) device for reading human brain signals. By processing the EEG data with a neural network, the system allows users wearing the device to control a video game simply by imagining activity.
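The sketch below, which is not the Georgia Tech team's model, shows the general shape of that pipeline: a short window of multi-channel EEG goes into a small convolutional network that outputs one of a few imagined-movement commands for a game loop to consume. The channel count, sampling rate, and number of commands are illustrative assumptions.

# Minimal sketch of motor-imagery classification: a one-second window of
# multi-channel EEG is mapped to a discrete command a game can act on.
import torch
import torch.nn as nn

class MotorImageryClassifier(nn.Module):
    def __init__(self, n_channels=16, n_samples=250, n_commands=4):
        super().__init__()
        self.net = nn.Sequential(
            # Temporal convolution across the EEG channels.
            nn.Conv1d(n_channels, 32, kernel_size=11, padding=5),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AvgPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
            nn.Linear(64, n_commands),  # e.g. left / right / up / down
        )

    def forward(self, eeg_window):
        # eeg_window: (batch, channels, samples), e.g. 1 s at 250 Hz
        return self.net(eeg_window)

if __name__ == "__main__":
    model = MotorImageryClassifier()
    window = torch.randn(1, 16, 250)          # one second of fake EEG
    command = model(window).argmax(dim=1)     # index of predicted command
    print(command.item())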

Based on Transformers, our new architecture advances genetic research by improving the ability to predict how DNA sequence influences gene expression.

When the Human Genome Project succeeded in mapping the DNA sequence of the human genome, the international research community were excited by the opportunity to better understand the genetic instructions that influence human health and development. DNA carries the genetic information that determines everything from eye colour to susceptibility to certain diseases and disorders. The roughly 20,000 sections of DNA in the human body known as genes contain instructions about the amino acid sequence of proteins, which perform numerous essential functions in our cells. Yet these genes make up less than 2% of the genome. The remaining base pairs — which account for 98% of the 3 billion “letters” in the genome — are called “non-coding” and contain less well-understood instructions about when and where genes should be produced or expressed in the human body.

Cloud-based content management provider Box has announced a new “deep scan” functionality that checks files as they are uploaded to identify sophisticated malware and avert attacks.

The new capabilities constitute part of Box Shield, which uses machine learning to prevent data leaks, detect threats, and spot any kind of abnormal behavior. In April of last year, Box added a slew of automated malware detection features to the mix, allowing Box Shield customers to spot malicious content that may already have been uploaded to a Box account. However, so far this has leaned heavily on “known” threats from external intelligence databases. Moving forward, Box said it will mesh deep learning technology with external threat intelligence capabilities to analyze files for malicious scripts, macros, and executables to protect companies from zero-day (unknown) vulnerabilities.

When a user uploads an infected file, Box will quarantine it for inspection but will still allow the user to view a preview of the file and continue working.
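The toy sketch below illustrates the layered flow described above, combining a lookup against known threat hashes with a content-level “deep scan” and a quarantine decision. It is not Box's implementation or API; the hash set, the marker list, and the scan heuristic are simplified placeholders for external intelligence feeds and a trained deep-learning model.

# Illustrative scan-and-quarantine flow: intel-feed hash lookup for
# "known" threats, plus a content check standing in for deep learning.
import hashlib

# Placeholder for hashes of previously seen malware from external feeds.
KNOWN_BAD_SHA256 = {"0" * 64}

# Stand-in markers; a real system would analyze scripts, macros, and
# executables with a trained model rather than simple substrings.
SUSPICIOUS_MARKERS = (b"AutoOpen", b"powershell -enc", b"CreateObject(")

def deep_scan(content: bytes) -> bool:
    """Toy content-level check standing in for a deep-learning scan."""
    return any(marker in content for marker in SUSPICIOUS_MARKERS)

def scan_upload(filename: str, content: bytes) -> dict:
    digest = hashlib.sha256(content).hexdigest()
    known_threat = digest in KNOWN_BAD_SHA256   # intel-feed lookup
    zero_day_suspect = deep_scan(content)       # content analysis
    return {
        "file": filename,
        "quarantined": known_threat or zero_day_suspect,
        # Mirroring the workflow described above: a flagged file is held
        # for inspection, but the uploader can still preview it.
        "preview_available": True,
    }

if __name__ == "__main__":
    print(scan_upload("report.docm", b"Sub AutoOpen() ... End Sub"))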

Artificial intelligence is often thought of as disembodied: a mind like a program, floating in a digital void. But human minds are deeply intertwined with our bodies — and an experiment with virtual creatures performing tasks in simulated environments suggests that AI may benefit from having a mind-body setup.

Stanford scientists were curious about the physical-mental interplay in our own evolution from blobs to tool-using apes. Could it be that the brain is influenced by the capabilities of the body and vice versa? It has been suggested before — over a century ago, in fact — and certainly it’s obvious that with a grasping hand one learns more quickly to manipulate objects than with a less differentiated appendage.

It’s hard to know whether the same could be said for an AI, since its development is more structured. Yet the questions such a concept brings up are compelling: Could an AI better learn and adapt to the world if it had evolved to do so from the start?
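One way to make that question concrete, as a toy illustration rather than the Stanford simulation, is a nested loop: an outer evolutionary process mutates a body parameter while an inner learning process tunes a controller for whatever body it is given, so fitness depends on how well brain and body fit together. The task, fitness function, and mutation scheme below are invented for illustration.

# Toy body-brain co-evolution: evolution picks limb lengths, and each
# lifetime a controller gain is tuned for the body it happens to have.
import random

TARGET_REACH = 1.0  # imaginary task: reach an object at distance 1.0

def learn_controller(limb_length: float, steps: int = 50) -> float:
    """Within one lifetime, hill-climb a control gain for the given body."""
    gain = random.uniform(0.0, 2.0)
    for _ in range(steps):
        candidate = gain + random.gauss(0.0, 0.1)
        # Keep the candidate only if it brings the reach closer to target.
        if abs(candidate * limb_length - TARGET_REACH) < abs(gain * limb_length - TARGET_REACH):
            gain = candidate
    return gain

def fitness(limb_length: float) -> float:
    gain = learn_controller(limb_length)
    return -abs(gain * limb_length - TARGET_REACH)  # closer is better

def evolve(generations: int = 20, population: int = 16) -> float:
    bodies = [random.uniform(0.1, 2.0) for _ in range(population)]
    for _ in range(generations):
        scored = sorted(bodies, key=fitness, reverse=True)
        survivors = scored[: population // 2]
        # Offspring inherit a mutated copy of a survivor's body.
        bodies = survivors + [
            max(0.05, b + random.gauss(0.0, 0.05)) for b in survivors
        ]
    return max(bodies, key=fitness)

if __name__ == "__main__":
    random.seed(0)
    print(f"evolved limb length: {evolve():.2f}")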