
We learn from our personal interactions with the world, and our memories of those experiences help guide our behaviors. Experience and memory are inextricably linked, or at least they seemed to be before a recent report on the formation of completely artificial memories. Using laboratory animals, investigators reverse engineered a specific natural memory by mapping the brain circuits underlying its formation. They then “trained” another animal by stimulating brain cells in the pattern of the natural memory. Doing so created an artificial memory that was retained and recalled in a manner indistinguishable from a natural one.

Memories are essential to the sense of identity that emerges from the narrative of personal experience. This study is remarkable because it demonstrates that by manipulating specific circuits in the brain, memories can be separated from that narrative and formed in the complete absence of real experience. The work shows that brain circuits that normally respond to specific experiences can be artificially stimulated and linked together in an artificial memory. That memory can be elicited by the appropriate sensory cues in the real environment. The research provides some fundamental understanding of how memories are formed in the brain and is part of a burgeoning science of memory manipulation that includes the transfer, prosthetic enhancement and erasure of memory. These efforts could have a tremendous impact on a wide range of individuals, from those struggling with memory impairments to those enduring traumatic memories, and they also have broad social and ethical implications.

In the recent study, the natural memory was formed by training mice to associate a specific odor (cherry blossoms) with a foot shock, which they learned to avoid by moving to the other end of a rectangular test chamber, where the air was infused with a different odor (caraway). The caraway scent came from a chemical called carvone, while the cherry blossom scent came from another chemical, acetophenone. The researchers found that acetophenone activates a specific type of receptor on a discrete type of olfactory sensory nerve cell.

Inspired by the human eye, researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed an adaptive metalens that is essentially a flat, electronically controlled artificial eye. The adaptive metalens simultaneously controls for three of the major contributors to blurry images: focus, astigmatism, and image shift.

The research is published in Science Advances.

“This research combines breakthroughs in artificial muscle technology with metalens technology to create a tunable metalens that can change its focus in real time, just like the human eye,” said Alan She, an SEAS graduate student at the Graduate School of Arts and Sciences, and first author of the paper. “We go one step further to build the capability of dynamically correcting for aberrations such as astigmatism and image shift, which the human eye cannot naturally do.”

When speaking about robots, people tend to imagine a wide range of different machines: Pepper, a social robot from SoftBank; Atlas, a humanoid from Boston Dynamics that can do backflips; the cyborg assassin from the Terminator movies; and the lifelike figures that populate the television series Westworld. People who are not familiar with the industry tend to hold polarized views: either they have unrealistically high estimations of robots’ ability to mimic human-level intelligence, or they underestimate the potential of new research and technologies.

Over the past year, my friends in the venture, tech, and startup scenes have asked me what’s “actually” going on in deep reinforcement learning and robotics. They wonder: how are AI-enabled robots different from traditional ones? Do they have the potential to revolutionize various industries? What are their capabilities and limitations? These questions tell me how surprisingly challenging it can be to understand the current technological progress and industry landscape, let alone make predictions about the future. I am writing this article as a humble attempt to demystify AI and, in particular, robotics enabled by deep reinforcement learning: topics we hear a lot about but understand superficially or not at all. To begin, I’ll answer a basic question: what are AI-enabled robots, and what makes them unique?

Harvard University researchers have developed a new powered exosuit that can make you feel as much as a dozen pounds lighter when walking or running. Scientific American reports that the 11-pound system, which is built around a pair of flexible shorts and a motor worn on the lower back, could benefit anyone who has to cover large distances on foot, including recreational hikers, military personnel, and rescue workers.

According to the researchers, who have published their findings in the journal Science, this system differs from previous exosuits because it’s able to make it easier to both walk and run. The challenge, as shown by a video accompanying the research, is that your legs work very differently depending on whether you’re walking or running. When walking, the team says your center of mass moves like an “inverted pendulum,” while running causes it to move like a “spring-mass system.” The system needs to be able to accommodate both of them, and sense when the wearer’s gait changes.
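The article does not publish the controller itself, but the gait-sensing step it describes is easy to picture. Below is a minimal sketch, assuming a hip-worn accelerometer and an illustrative peak-to-peak acceleration cutoff; the threshold, window length, and sensor setup are all assumptions for illustration, not the Harvard team’s actual method.

```python
# Minimal sketch (not the Harvard team's controller): telling walking from
# running using vertical acceleration from a hypothetical hip-worn IMU.
# The 1.5 g peak-to-peak cutoff and the synthetic signals are assumptions.
import numpy as np

def classify_gait(vertical_accel_g):
    """Label a short window of vertical acceleration samples (in g).

    Running involves a flight phase and harder foot strikes, so its
    peak-to-peak acceleration is much larger than walking's.
    """
    window = np.asarray(vertical_accel_g, dtype=float)
    peak_to_peak = window.max() - window.min()
    return "running" if peak_to_peak > 1.5 else "walking"

# Toy usage with synthetic windows standing in for real sensor data.
t = np.linspace(0, 4 * np.pi, 200)
walk_window = 1.0 + 0.3 * np.sin(t)          # gentle oscillation around 1 g
run_window = 1.0 + 2.0 * np.abs(np.sin(t))   # sharp impact-like peaks
print(classify_gait(walk_window))  # -> walking
print(classify_gait(run_window))   # -> running
```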

Technology that translates cortical activity into speech would be transformative for people unable to communicate as a result of neurological impairment. Decoding speech from neural activity is challenging because speaking requires extremely precise and dynamic control of multiple vocal tract articulators on the order of milliseconds. Here, we designed a neural decoder that explicitly leverages the continuous kinematic and sound representations encoded in cortical activity to generate fluent and intelligible speech. A recurrent neural network first decoded direct cortical recordings into vocal tract movement representations, and then transformed those representations into acoustic speech output. Modeling the articulatory dynamics of speech significantly enhanced performance with limited data. Naïve listeners were able to accurately identify and transcribe decoded sentences. Additionally, speech decoding was effective not only for audibly produced speech but also when participants silently mimed speech. These results advance the development of speech neuroprosthetic technology to restore spoken communication in patients with disabling neurological disorders.
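The abstract describes a two-stage pipeline: cortical activity is first decoded into articulatory (vocal-tract) trajectories, and those trajectories are then mapped to acoustics. A rough sketch of that structure in Python, assuming bidirectional LSTMs and illustrative feature dimensions (this is not the authors’ published architecture), could look like the following:

```python
# Hedged sketch of the two-stage decoding idea from the abstract:
# stage 1 maps cortical features to vocal-tract kinematics, stage 2 maps
# those kinematics to acoustic features. Layer sizes, feature dimensions,
# and the choice of LSTMs are illustrative assumptions.
import torch
import torch.nn as nn

class TwoStageSpeechDecoder(nn.Module):
    def __init__(self, n_neural=256, n_kinematic=33, n_acoustic=32, hidden=128):
        super().__init__()
        # Stage 1: cortical activity -> articulatory (vocal-tract) trajectories
        self.neural_to_kinematics = nn.LSTM(
            n_neural, hidden, num_layers=2, batch_first=True, bidirectional=True)
        self.kin_head = nn.Linear(2 * hidden, n_kinematic)
        # Stage 2: articulatory trajectories -> acoustic features (e.g., spectrogram frames)
        self.kinematics_to_acoustics = nn.LSTM(
            n_kinematic, hidden, num_layers=2, batch_first=True, bidirectional=True)
        self.acoustic_head = nn.Linear(2 * hidden, n_acoustic)

    def forward(self, neural):                 # neural: (batch, time, n_neural)
        h, _ = self.neural_to_kinematics(neural)
        kinematics = self.kin_head(h)          # (batch, time, n_kinematic)
        h2, _ = self.kinematics_to_acoustics(kinematics)
        acoustics = self.acoustic_head(h2)     # (batch, time, n_acoustic)
        return kinematics, acoustics

# Toy usage with random tensors standing in for real recordings.
model = TwoStageSpeechDecoder()
fake_recordings = torch.randn(2, 100, 256)     # 2 trials, 100 time steps
kin, acoustic = model(fake_recordings)
print(kin.shape, acoustic.shape)               # (2, 100, 33) and (2, 100, 32)
```

The point of the intermediate kinematic stage is to give the model a physically meaningful bottleneck, which is what the abstract credits for the improved performance with limited data.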

Evgeny became more widely known to the Russian public in March, after becoming one of the first to have a chip implanted – between his thumb and forefinger – even though such surgical procedures are forbidden in Russia.

He sleeps two hours a night, plays guitar with a custom prosthesis, and has an illegally implanted microchip. When Evgeny Nekrasov was disfigured by an accident at 14, he decided to leverage future technology to build a new life.

Evgeny, now 21, has no recollection of “messing around” after school with his friends in his hometown of Vladivostok and picking up the gas canister that exploded in his hands and into his face.

But the days after he woke up without sight in hospital are hard-coded in his memory.

TOKYO — Cyborg technology to restore bodily functions that have declined due to aging, technology to eliminate industrial waste from the Earth’s environment, and artificial hibernation are among 25 areas the Japanese government aims to support, Nikkei has learned.

Tokyo will invite research proposals in these selected areas and choose which it will support for up to a decade, with a budget of 100 billion yen ($921 million) for the first five years, a government source said.

The research and development program aims to attract researchers both in Japan and abroad by demonstrating Tokyo’s enthusiasm for promoting ambitious scientific efforts to tackle major issues, including the declining birthrate and aging population, as well as to develop new industries around the technologies these efforts create.