Elon Musk, one of the world's wealthiest men and a tech entrepreneur, warns that real androids are coming. Some fear an impending catastrophe as a result of robots produced with artificial intelligence, while others speculate he is developing Neuralink to offer humans a fighting chance. The video Musk was responding to comes from Cornwall-based Engineered Arts.
Category: robotics/AI
Dec 13 (Reuters) — “Do you want to see yourself acting in a movie or on TV?” said the description for one app on online stores, offering users the chance to create AI-generated synthetic media, also known as deepfakes.
“Do you want to see your best friend, colleague, or boss dancing?” it added. “Have you ever wondered how would you look if your face swapped with your friend’s or a celebrity’s?”
The same app was advertised differently on dozens of adult sites: “Make deepfake porn in a sec,” the ads said. “Deepfake anyone.”
Department of Energy Announces $5.7 Million for Research on Artificial Intelligence and Machine Learning (AI/ML) for Nuclear Physics Accelerators and Detectors
Posted in information science, particle physics, robotics/AI
WASHINGTON, D.C. — Today, the U.S. Department of Energy (DOE) announced $5.7 million for six projects that will implement artificial intelligence methods to accelerate scientific discovery in nuclear physics research. The projects aim to optimize the overall performance of complex accelerator and detector systems for nuclear physics using advanced computational methods.
“Artificial intelligence has the potential to shorten the timeline for experimental discovery in nuclear physics,” said Timothy Hallman, DOE Associate Director of Science for Nuclear Physics. “Particle accelerator facilities and nuclear physics instrumentation face a variety of technical challenges in simulations, control, data acquisition, and analysis that artificial intelligence holds promise to address.”
The six projects will be conducted by nuclear physics researchers at five DOE national laboratories and four universities. Projects will include the development of deep learning algorithms to identify a unique signal for a conjectured, very slow nuclear process known as neutrinoless double beta decay. This decay, if observed, would be at least ten thousand times rarer than the rarest known nuclear decay and could demonstrate how our universe became dominated by matter rather than antimatter. Supported efforts also include AI-driven detector design for the Electron-Ion Collider accelerator project under construction at Brookhaven National Laboratory, which will probe the internal structure and forces of the protons and neutrons that compose the atomic nucleus.
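How might deep learning pick a rare decay signature out of detector data? Below is a minimal, hypothetical sketch in PyTorch of a small 1-D convolutional network that scores simulated detector waveforms as candidate signal versus background; the architecture, data shapes, and names are illustrative assumptions, not code from any of the funded projects.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a small 1-D convolutional classifier that scores
# detector waveforms as "candidate signal" vs. "background". The layout and
# sizes are illustrative only, not taken from any funded project.
class WaveformClassifier(nn.Module):
    def __init__(self, n_samples=256):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        self.head = nn.Linear(32 * (n_samples // 16), 1)

    def forward(self, x):                 # x: (batch, 1, n_samples)
        z = self.features(x)
        return self.head(z.flatten(1))    # raw logit: >0 favors "signal"

# Toy usage with random tensors standing in for simulated waveforms.
model = WaveformClassifier()
waveforms = torch.randn(8, 1, 256)
labels = torch.randint(0, 2, (8, 1)).float()
loss = nn.BCEWithLogitsLoss()(model(waveforms), labels)
loss.backward()
print(loss.item())
```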
Novel theorem demonstrates convolutional neural networks can always be trained on quantum computers, overcoming threat of ‘barren plateaus’ in optimization problems.
Convolutional neural networks running on quantum computers have generated significant buzz for their potential to analyze quantum data better than classical computers can. While a fundamental trainability problem known as “barren plateaus” has limited the application of these neural networks to large data sets, new research overcomes that Achilles heel with a rigorous proof that guarantees scalability.
“The way you construct a quantum neural network can lead to a barren plateau—or not,” said Marco Cerezo, coauthor of the paper titled “Absence of Barren Plateaus in Quantum Convolutional Neural Networks,” published recently by a Los Alamos National Laboratory team in Physical Review X. Cerezo is a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos. “We proved the absence of barren plateaus for a special type of quantum neural network. Our work provides trainability guarantees for this architecture, meaning that one can generically train its parameters.”
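To make the “barren plateau” idea concrete: a plateau shows up as gradients whose variance shrinks exponentially with the number of qubits, so training stalls. The sketch below, written against the PennyLane library, estimates that variance for a generic hardware-efficient circuit; it is an illustrative diagnostic only, and the ansatz, layer count, and sample sizes are assumptions rather than the architecture analyzed in the Physical Review X paper.

```python
import pennylane as qml
from pennylane import numpy as np

# Hypothetical sketch: an empirical check for barren plateaus. For circuits of
# increasing width, sample random parameters and measure the variance of one
# partial derivative of the cost; exponentially shrinking variance is the
# barren-plateau signature. This generic hardware-efficient ansatz is an
# illustration, not the QCNN architecture from the paper.
def gradient_variance(n_qubits, n_layers=4, n_samples=50):
    dev = qml.device("default.qubit", wires=n_qubits)

    @qml.qnode(dev)
    def cost(params):
        for layer in range(n_layers):
            for w in range(n_qubits):
                qml.RY(params[layer, w], wires=w)
            for w in range(n_qubits - 1):
                qml.CZ(wires=[w, w + 1])
        return qml.expval(qml.PauliZ(0))

    grad_fn = qml.grad(cost)
    samples = []
    for _ in range(n_samples):
        raw = np.random.uniform(0, 2 * np.pi, (n_layers, n_qubits))
        params = np.array(raw, requires_grad=True)
        samples.append(grad_fn(params)[0, 0])  # derivative w.r.t. the first angle
    return np.var(samples)

for n in (2, 4, 6, 8):
    print(f"{n} qubits: gradient variance ~ {gradient_variance(n):.4f}")
```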
But wait, should we believe it?
An artificial intelligence warning AI researchers about the dangers of AI sounds like the setup of a delightful B movie, but truth is often stranger than fiction.
A professor and a fellow at the University of Oxford came face to face with that reality when they invited an AI to participate in a debate at the Oxford Union on, you guessed it, the ethics of AI. Specifically, as Dr. Alex Connock and Professor Andrew Stephen explain in the Conversation, the prompt was “This house believes that AI will never be ethical.” The AI, it seems, agreed.
“AI will never be ethical,” argued the Megatron-Turing Natural Language Generation model, which was notably trained on Wikipedia, Reddit, and millions of English-language news articles published between 2016 and 2019. “It is a tool, and like any tool, it is used for good and bad.”
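Megatron-Turing NLG itself is not something most readers can simply download, but the debate exercise is easy to approximate with an openly available model. The sketch below uses the Hugging Face transformers pipeline with GPT-2 as a stand-in; the prompt framing and sampling settings are illustrative assumptions, not the setup used at the Oxford Union.

```python
# Hypothetical sketch: reproducing the debate exercise with a small, openly
# available model. GPT-2 stands in for the much larger Megatron-Turing NLG,
# and the prompt wording and sampling settings are illustrative.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = ("Motion: This house believes that AI will never be ethical.\n"
          "Opening argument: ")
result = generator(prompt, max_new_tokens=80, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```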
The company belatedly gets into the race to build bigger, better language models despite ethical concerns.
A team at Harvard has documented a new state of matter which could advance quantum technology.
The same Saildrones captured the first-ever video from inside a major hurricane at sea level in September.
Six autonomous Saildrones are setting off on a six-month journey to tackle some of Earth’s most challenging ocean conditions in order to improve climate change and weather forecast computer models, CNN reported.
They will travel to the Gulf Stream throughout the winter months, where they will collect data on the process by which oceans absorb carbon (carbon uptake). So far, figures for this uptake have been only estimates produced by statistical methods, and therefore cannot be fully relied upon.
“This Saildrone mission will collect more carbon dioxide measurements in the Gulf Stream region in winter than has ever been collected in this location and time of year,” said Jaime Palter, a scientist at the University of Rhode Island who is co-leading the research.
“With this data, we will sharpen our quantification of ocean carbon uptake and the processes that enable that uptake in this dynamic region.”
This work will be crucial to properly estimate the Global Carbon Budget.
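For context on what “quantifying ocean carbon uptake” involves: air-sea CO2 flux is commonly estimated with a bulk formula that combines the sea-air pCO2 difference, a wind-speed-dependent gas transfer velocity, and CO2 solubility. The sketch below follows that general recipe; the coefficient, solubility value, and function names are illustrative assumptions, not the mission’s actual processing pipeline.

```python
# Hypothetical sketch: estimating air-sea CO2 flux from the kind of
# measurements a Saildrone collects (surface-ocean and atmospheric pCO2,
# wind speed). The bulk formula follows a commonly cited quadratic wind-speed
# parameterization, but the constants and names here are illustrative.

def gas_transfer_velocity(wind_speed_ms, schmidt_number=660.0):
    """Gas transfer velocity k in cm/hr from a quadratic wind-speed fit."""
    return 0.251 * wind_speed_ms**2 * (schmidt_number / 660.0) ** -0.5

def co2_flux(pco2_sea_uatm, pco2_air_uatm, wind_speed_ms,
             solubility_mol_per_l_atm=0.035):
    """Air-sea CO2 flux in mol m^-2 hr^-1; negative values mean ocean uptake."""
    k_cm_per_hr = gas_transfer_velocity(wind_speed_ms)
    delta_pco2_atm = (pco2_sea_uatm - pco2_air_uatm) * 1e-6
    # Convert cm/hr -> m/hr and mol/(L atm) -> mol/(m^3 atm).
    return (k_cm_per_hr / 100.0) * (solubility_mol_per_l_atm * 1000.0) * delta_pco2_atm

# Example: surface water undersaturated in CO2 relative to the air acts as a sink.
print(co2_flux(pco2_sea_uatm=360.0, pco2_air_uatm=415.0, wind_speed_ms=12.0))
```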
Algorithms are now essential for making predictions, boosting efficiency, and optimizing in real time.