
A new study shows that artificial intelligence networks based on human brain connectivity can perform cognitive tasks efficiently.

By examining MRI data from a large Open Science repository, researchers reconstructed a brain connectivity pattern and applied it to an artificial neural network (ANN). An ANN is a computing system consisting of multiple input and output units, much like the biological brain. A team of researchers from The Neuro (Montreal Neurological Institute-Hospital) and the Quebec Artificial Intelligence Institute trained the ANN to perform a cognitive memory task and observed how it worked to complete the assignment.

This is a unique approach in two ways. Previous work on brain connectivity, also known as connectomics, focused on describing brain organization, without looking at how it actually performs computations and functions. Secondly, traditional ANNs have arbitrary structures that do not reflect how real brain networks are organized. By integrating brain connectomics into the construction of ANN architectures, researchers hoped to both learn how the wiring of the brain supports specific cognitive skills, and to derive novel design principles for artificial networks.
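The core idea of integrating connectomics into an ANN can be illustrated by masking a recurrent network's weight matrix with a binary connectivity matrix, so that units interact only where an anatomical connection exists. The sketch below uses a random mask for illustration; a real study would derive it from MRI-based connectome data, and the simple tanh recurrence here is a generic stand-in, not the team's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # hypothetical number of brain regions / network units

# Hypothetical binary "connectome": 1 where an anatomical connection exists.
# A real experiment would build this from MRI-derived connectivity data.
connectome_mask = (rng.random((n, n)) < 0.3).astype(float)

# Trainable weights are constrained to the connectome's topology:
# entries with no anatomical connection stay exactly zero.
weights = rng.normal(scale=0.1, size=(n, n)) * connectome_mask

def step(state, inputs):
    """One recurrent update restricted to connectome-permitted connections."""
    return np.tanh(state @ weights + inputs)

state = np.zeros(n)
state = step(state, rng.normal(size=n))
```

During training, gradient updates would be multiplied by the same mask so forbidden connections never become non-zero.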

Google AI lead Jeff Dean’s appearance at TED comes at a time when critics, including current Google employees, are calling for greater scrutiny of big tech’s control over the world’s AI systems. Among those critics was one who spoke right after Dean at TED. Coder Xiaowei R. Wang, creative director of the indie tech magazine Logic, argued for community-led innovations. “Within AI there is only a case for optimism if people and communities can make the case themselves, instead of people like Jeff Dean and companies like Google making the case for them, while shutting down the communities [that] AI for Good is supposed to help,” she said. (AI for Good is a movement that seeks to orient machine learning toward solving the world’s most pressing social equity problems.)

TED curator Chris Anderson and Greg Brockman, co-founder of the AI research company OpenAI, also wrestled with the unintended consequences of powerful machine learning systems at the end of the conference. Brockman described a scenario in which humans serve as moral guides to AI. “We can teach the system the values we want, as we would a child,” he said. “It’s an important but subtle point. I think you do need the system to learn a model of the world. If you’re teaching a child, they need to learn what good and bad is.”

There is also room for some gatekeeping once the machines have been taught, Anderson suggested. “One of the key issues to keeping this thing on track is to very carefully pick the people who look at the output of these unsupervised learning systems,” he said.

NASA’s Perseverance Mars rover attempted its first-ever collection of a sample from the Martian surface on August 6. However, data shows that while the rover’s drill successfully bored into the surface, no regolith ended up in the sample tube.

Meanwhile, as Perseverance was preparing for the sample collection event, a team of researchers using ESA’s Mars Express orbiter found evidence that what were previously thought to be lakes of liquid water beneath Mars’ south pole might actually be deposits of clay.

Perseverance’s sample collection failure

On August 6, Perseverance lowered its 2-meter robotic arm to the Martian surface, where a drill located at the end of the arm began carving into the local rock.

Lurking in the background of the quest for true quantum supremacy is an awkward possibility: hyper-fast number-crunching tasks based on quantum trickery might just be a load of hype.

Now, a pair of physicists from École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and Columbia University in the US have come up with a better way to judge the potential of near-term quantum devices: by simulating, on more traditional hardware, the quantum mechanics they rely upon.

Their study made use of a neural network developed by EPFL’s Giuseppe Carleo and his colleague Matthias Troyer back in 2016, which uses machine learning to approximate a quantum system tasked with running a specific process.
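The general technique behind that network, known as a neural-network quantum state, represents a quantum system's wavefunction amplitudes with a restricted Boltzmann machine. Below is a minimal sketch of evaluating such an ansatz for one spin configuration; the parameter values are random placeholders (in practice they would be optimised variationally), and the system sizes are arbitrary illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n_visible, n_hidden = 6, 12  # hypothetical spin count and hidden-unit count

# Variational parameters of the RBM ansatz (randomly initialised here;
# a real calculation optimises them to minimise the system's energy).
a = rng.normal(scale=0.01, size=n_visible)         # visible biases
b = rng.normal(scale=0.01, size=n_hidden)          # hidden biases
W = rng.normal(scale=0.01, size=(n_visible, n_hidden))  # couplings

def log_psi(spins):
    """Log-amplitude of the RBM ansatz for a spin configuration of +/-1 values."""
    theta = b + spins @ W
    return spins @ a + np.sum(np.log(2.0 * np.cosh(theta)))

spins = rng.choice([-1.0, 1.0], size=n_visible)
amp = np.exp(log_psi(spins))
```

Working with log-amplitudes keeps the evaluation numerically stable as the number of hidden units grows.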

Natural language processing continues to find its way into unexpected corners. This time, it’s phishing emails. In a small study, researchers found that they could use the deep learning language model GPT-3, along with other AI-as-a-service platforms, to significantly lower the barrier to entry for crafting spearphishing campaigns at a massive scale.

Researchers have long debated whether it would be worth the effort for scammers to train machine learning algorithms that could then generate compelling phishing messages. Mass phishing messages are simple and formulaic, after all, and are already highly effective. Highly targeted and tailored “spearphishing” messages are more labor intensive to compose, though. That’s where NLP may come in surprisingly handy.

At the Black Hat and Defcon security conferences in Las Vegas this week, a team from Singapore’s Government Technology Agency presented a recent experiment in which they sent targeted phishing emails, some crafted by hand and others generated by an AI-as-a-service platform, to 200 of their colleagues. Both sets of messages contained links that were not actually malicious but simply reported clickthrough rates back to the researchers. They were surprised to find that more people clicked the links in the AI-generated messages than in the human-written ones, by a significant margin.
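The comparison the experiment makes, clickthrough rate per message group, is simple to compute once clicks are logged. A minimal sketch, assuming click events are recorded as (group, clicked) pairs; the field names and data shape are hypothetical, not taken from the study:

```python
from collections import Counter

def clickthrough_rates(events):
    """Compute per-group clickthrough rates from (group, clicked) event pairs."""
    sent = Counter()     # messages sent per group
    clicked = Counter()  # clicks recorded per group
    for group, did_click in events:
        sent[group] += 1
        clicked[group] += int(did_click)
    return {g: clicked[g] / sent[g] for g in sent}

# Toy data: 3 AI-generated and 3 human-written messages.
events = [("ai", True), ("ai", True), ("ai", False),
          ("human", True), ("human", False), ("human", False)]
rates = clickthrough_rates(events)
```

With this toy data the AI-generated group comes out ahead (2/3 vs 1/3), mirroring the direction, though not the actual numbers, of the researchers' finding.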