
A team at Flinders University in South Australia has developed a new vaccine believed to be the first human drug in the world to be completely designed by artificial intelligence (AI).

While drugs have been designed using computers before, this vaccine went a step further: it was independently created by an AI program called SAM (Search Algorithm for Ligands).

Flinders University Professor Nikolai Petrovsky, who led the development, told Business Insider Australia that its name is derived from what it was tasked to do: search the universe of all conceivable compounds to find a good human drug (also called a ligand).

Auditory stimulus reconstruction is a technique that finds the best approximation of the acoustic stimulus from the population of evoked neural activity. Reconstructing speech from the human auditory cortex creates the possibility of a speech neuroprosthetic to establish a direct communication with the brain and has been shown to be possible in both overt and covert conditions. However, the low quality of the reconstructed speech has severely limited the utility of this method for brain-computer interface (BCI) applications. To advance the state-of-the-art in speech neuroprosthesis, we combined the recent advances in deep learning with the latest innovations in speech synthesis technologies to reconstruct closed-set intelligible speech from the human auditory cortex. We investigated the dependence of reconstruction accuracy on linear and nonlinear (deep neural network) regression methods and the acoustic representation that is used as the target of reconstruction, including auditory spectrogram and speech synthesis parameters. In addition, we compared the reconstruction accuracy from low and high neural frequency ranges. Our results show that a deep neural network model that directly estimates the parameters of a speech synthesizer from all neural frequencies achieves the highest subjective and objective scores on a digit recognition task, improving the intelligibility by 65% over the baseline method which used linear regression to reconstruct the auditory spectrogram. These results demonstrate the efficacy of deep learning and speech synthesis algorithms for designing the next generation of speech BCI systems, which not only can restore communications for paralyzed patients but also have the potential to transform human-computer interaction technologies.
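As a rough illustration of the baseline method described in the abstract, and not the study's actual pipeline or data, here is a toy linear-regression reconstruction of spectrogram frames from simulated neural activity. Every size, signal, and parameter below is invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: T time frames of "neural activity" (E electrodes)
# and the auditory spectrogram frames (F frequency bins) they encode.
T, E, F = 500, 64, 32
neural = rng.normal(size=(T, E))
true_W = rng.normal(size=(E, F))
spectrogram = neural @ true_W + 0.1 * rng.normal(size=(T, F))

# The abstract's baseline: linear regression from neural features to
# spectrogram frames (here ridge-regularized least squares).
lam = 1e-2
W = np.linalg.solve(neural.T @ neural + lam * np.eye(E),
                    neural.T @ spectrogram)
reconstruction = neural @ W

# Correlation between true and reconstructed frames, a common
# objective score for stimulus reconstruction.
corr = np.corrcoef(spectrogram.ravel(), reconstruction.ravel())[0, 1]
print(round(corr, 3))
```

The study's improvement came from replacing this linear map with a deep neural network that predicts speech-synthesizer parameters instead of the spectrogram directly.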

Since its invention by a Hungarian architect in 1974, the Rubik’s Cube has furrowed the brows of many who have tried to solve it, but the 3D logic puzzle is no match for an artificial intelligence system created by researchers at the University of California, Irvine.

DeepCubeA, a learning algorithm programmed by UCI scientists and mathematicians, can find the solution in a fraction of a second, without any specific domain knowledge or in-game coaching from humans. This is no simple task considering that the cube has completion paths numbering in the billions but only one goal state—each of six sides displaying a solid color—which apparently can’t be found through random moves.

For a study published today in Nature Machine Intelligence, the researchers demonstrated that DeepCubeA solved 100 percent of all test configurations, finding the shortest path to the goal state about 60 percent of the time. The algorithm also works on other combinatorial games, such as the sliding tile puzzle, Lights Out and Sokoban.
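The coverage doesn't show DeepCubeA's code, but the underlying search idea can be sketched on the 3x3 sliding-tile puzzle mentioned above: classic A* with a hand-written Manhattan-distance heuristic, which DeepCubeA effectively replaces with a cost-to-go estimate learned by a deep neural network.

```python
import heapq

# Toy illustration (not DeepCubeA itself): A* on the 8-puzzle.
GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # 0 is the blank

def manhattan(state):
    # Sum of each tile's distance from its goal position.
    dist = 0
    for i, tile in enumerate(state):
        if tile:
            dist += abs(i // 3 - (tile - 1) // 3) + abs(i % 3 - (tile - 1) % 3)
    return dist

def neighbors(state):
    # Slide the blank up, down, left, or right.
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def solve(start):
    # Classic A*: priority f = moves so far (g) + heuristic estimate.
    frontier = [(manhattan(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        _, g, state = heapq.heappop(frontier)
        if state == GOAL:
            return g  # length of the shortest path found
        for nxt in neighbors(state):
            if nxt not in best_g or g + 1 < best_g[nxt]:
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + manhattan(nxt), g + 1, nxt))

print(solve((1, 2, 3, 4, 5, 6, 0, 7, 8)))  # prints 2: two slides from the goal
```

The Rubik's Cube version faces the same structure at vastly larger scale, which is why a learned heuristic, rather than a hand-crafted one, is the key contribution.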

https://www.youtube.com/watch?v=ytva8DDV_Ic

What Is Big Data? & How Big Data Is Changing The World! https://www.facebook.com/singularityprosperity/videos/439181406563439/


In this video, we’ll be discussing big data – more specifically, what big data is, the exponential rate of growth of data, how we can utilize the vast quantities of data being generated as well as the implications of linked data on big data.

[0:30–7:50] — Starting off, we’ll look at how data has been used as a tool from the origins of human evolution, starting at the hunter-gatherer age and leading up to the present information age. Afterwards, we’ll look at statistics demonstrating the exponential rate of growth and projected future growth of data.

[7:50–18:55] — Following that, we’ll discuss what exactly big data is, delving deeper into the two types of data, structured and unstructured, and how they are analyzed both by humans and by machine learning (AI).

Despite their names, artificial intelligence technologies and their component systems, such as artificial neural networks, don’t have much to do with real brain science. I’m a professor of bioengineering and neurosciences interested in understanding how the brain works as a system – and how we can use that knowledge to design and engineer new machine learning models.

In recent decades, brain researchers have learned a huge amount about the physical connections in the brain and about how the nervous system routes information and processes it. But there is still a vast amount yet to be discovered.

At the same time, computer algorithms, software and hardware advances have brought machine learning to previously unimagined levels of achievement. I and other researchers in the field, including a number of its leaders, have a growing sense that finding out more about how the brain processes information could help programmers translate the concepts of thinking from the wet and squishy world of biology into all-new forms of machine learning in the digital world.

It sounds like science fiction: a device that can reconnect a paralyzed person’s brain to his or her body. But that’s exactly what the experimental NeuroLife system does. Developed by Battelle and Ohio State University, NeuroLife uses a brain implant, an algorithm and an electrode sleeve to give paralysis patients back control of their limbs. For Ian Burkhart, NeuroLife’s first test subject, the implications could be life-changing.

Featured in this episode:

Battelle:
https://www.battelle.org/

Ohio State University:
https://wexnermedical.osu.edu/

Producer and Editor — Alan Jeffries
Camera — Zach Frankart, Alan Jeffries
Sound Recordist — Brandon MacLean
Graphics — Sylvia Yang
Animators — Ricardo Mendes, James Hazael, Andrew Embury
Sound Mix and Design — Cadell Cook

A new method enables researchers to test algorithms for spotting genes that contribute to a complex trait or condition, such as autism.

Researchers often study the genetics of complex traits using genome-wide association studies (GWAS). In these studies, scientists compare the genomes of people who have a condition with those of people without the condition, looking for genetic variants likely to contribute to the condition. These studies often require tens of thousands of people to yield statistically significant results.

GWAS have identified more than 100 genomic regions associated with schizophrenia, for example, and 12 linked to autism. Results are often difficult to interpret, however. Causal variants for a condition may be inherited with nearby sections of DNA that do not play a role.
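The case-control comparison at the heart of a GWAS can be sketched for a single variant with a 2x2 chi-square test on allele counts. The counts below are invented, and real studies test millions of variants while adjusting for population structure.

```python
import math

# Hypothetical single-variant association test: compare alternate vs.
# reference allele counts between case and control chromosomes.
def allele_test(case_alt, case_ref, ctrl_alt, ctrl_ref):
    table = [[case_alt, case_ref], [ctrl_alt, ctrl_ref]]
    total = case_alt + case_ref + ctrl_alt + ctrl_ref
    row = [case_alt + case_ref, ctrl_alt + ctrl_ref]
    col = [case_alt + ctrl_alt, case_ref + ctrl_ref]
    chi2 = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    # p-value for a chi-square statistic with 1 degree of freedom
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Invented example: alt allele on 450/1000 case chromosomes
# vs. 300/1000 control chromosomes.
chi2, p = allele_test(450, 550, 300, 700)
print(p < 5e-8)  # prints True: passes the usual genome-wide threshold
```

The stringent 5e-8 threshold, which corrects for the roughly one million independent tests in a genome scan, is why such large cohorts are needed to reach statistical significance.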

The Defense Advanced Research Projects Agency made headlines last fall when it announced that it was pledging $2 billion for a multi-year effort to develop new artificial intelligence technology.

Months later, DARPA’s “AI Next” program is already bearing fruit, said Peter Highnam, the agency’s deputy director.

DARPA — which has for decades fostered some of the Pentagon’s most cutting-edge capabilities — breaks down AI technology development into three distinct waves, he said during a meeting with reporters in Washington, D.C.

Irina Kareva translates biology into mathematics and vice versa. She writes mathematical models that describe the dynamics of cancer, with the goal of developing new drugs that target tumors. “The power and beauty of mathematical modeling lies in the fact that it makes you formalize, in a very rigorous way, what we think we know,” Kareva says. “It can help guide us to where we should keep looking, and where there may be a dead end.” It all comes down to asking the right question and translating it to the right equation, and back.
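A minimal example of the kind of model Kareva describes, and emphatically not her actual equations: logistic tumor growth with a drug-induced kill term, dN/dt = rN(1 - N/K) - dN, integrated with simple Euler steps. All parameters are illustrative.

```python
# Generic tumor-dynamics sketch with invented parameters.
r, K, d = 0.3, 1e9, 0.4   # growth rate, carrying capacity, drug kill rate
N, dt = 1e6, 0.1          # initial tumor cells, time step (days)

for step in range(1000):  # simulate 100 days at dt = 0.1
    N += dt * (r * N * (1 - N / K) - d * N)

print(N < 1e6)  # prints True: with d > r, the tumor shrinks
```

Formalizing the question this way makes the assumptions testable: here the model predicts eradication whenever the kill rate exceeds the growth rate, a claim one can then check against data.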

Thanks to a $1.5 million grant from the National Science Foundation, a group of Virginia Tech engineers hopes to redefine these search and rescue protocols by teaming up human searchers with unmanned aerial robots, or drones.

In efforts led by Ryan Williams, an assistant professor in the Bradley Department of Electrical and Computer Engineering within the College of Engineering, these drones will use autonomous algorithms and machine learning to complement search and rescue efforts from the air. The drones will also suggest tasks and send updated information to human searchers on the ground.

Using mathematical models based on historical data reflecting what lost people actually do, combined with typical searcher behavior, the researchers hope this novel approach of balancing autonomy with human collaboration will make searches more effective. The team has received support from the Virginia Department of Emergency Management and will work closely with the local Black Diamond Search and Rescue Council throughout the project.
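One common formalism for such searches, offered here only as a hypothetical sketch, is Bayesian search theory: a probability map over locations (in practice built from lost-person behavior data) that is updated by Bayes' rule each time a drone sweep fails to find anyone.

```python
import numpy as np

prior = np.full((4, 4), 1 / 16)  # uniform prior over a toy 4x4 search grid
p_detect = 0.8                   # assumed chance a drone spots a person present

def sweep_miss(prob, cell):
    # After a miss in `cell`, its probability shrinks by the factor
    # (1 - p_detect) and the map is renormalized, shifting mass elsewhere.
    prob = prob.copy()
    prob[cell] *= (1 - p_detect)
    return prob / prob.sum()

posterior = sweep_miss(prior, (0, 0))
print(posterior[0, 0] < prior[0, 0])  # prints True: searched cell less likely
```

Repeating this update after each sweep yields a ranked map of where to search next, which is the kind of task suggestion the drones could send to human searchers on the ground.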