
Creating human-like AI is about more than mimicking human behavior — technology must also be able to process information, or ‘think’, like humans if it is to be fully relied upon. New research, published in the journal Patterns and led by the University of Glasgow’s School of Psychology…


Magnetic solids can be demagnetized quickly with a short laser pulse, and there are already so-called HAMR (Heat Assisted Magnetic Recording) memories on the market that function according to this principle. However, the microscopic mechanisms of ultrafast demagnetization remain unclear. Now, a team at HZB has developed a new method at BESSY II to quantify one of these mechanisms and has applied it to the rare-earth element gadolinium, whose magnetic properties arise from electrons in both the 4f and 5d shells. This study completes a series of experiments the team performed on nickel and iron-nickel alloys. Understanding these mechanisms is useful for developing ultrafast data storage devices.

In 2021, Instagram is one of the most popular social media platforms. Recent statistics show that the platform now boasts over 1 billion monthly active users. With this many eyes on their content, influencers can reap great rewards through sponsored posts if they have a large enough following. The question for today then becomes: how do we effectively grow an Instagram account in the age of algorithmic bias? Instagram expert and AI growth specialist Faisal Shafique helps us answer this question, drawing on his experience growing his @fact account to about 8M followers while also helping major, edgy brands like Fashion Nova grow to over 20M.


Using machine learning, a computer model can teach itself to smell in just a few minutes. When it does, researchers have found, it builds a neural network that closely mimics the olfactory circuits that animal brains use to process odors.

Animals from fruit flies to humans all use essentially the same strategy to process olfactory information in the brain. But neuroscientists who trained an artificial neural network to take on a simple odor classification task were surprised to see it replicate biology’s strategy so faithfully.
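To make the kind of setup described above concrete, here is a minimal sketch of a small feed-forward network trained on a toy odor-classification task. The receptor count, number of odor classes, layer sizes, labeling rule, and synthetic data are all assumptions made for illustration; this is not the researchers’ actual model or dataset.

```python
import numpy as np

# Toy stand-in for the task described above: classify synthetic "odor"
# vectors (simulated receptor activations) into a handful of odor classes.
# All sizes, data, and the labeling rule are hypothetical.
rng = np.random.default_rng(0)
n_receptors, n_hidden, n_classes, n_samples = 50, 30, 5, 500

X = rng.normal(size=(n_samples, n_receptors))    # fake receptor responses
y = X[:, :n_classes].argmax(axis=1)              # toy rule: strongest of the first 5 features
Y = np.eye(n_classes)[y]                         # one-hot targets

W1 = rng.normal(scale=0.1, size=(n_receptors, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_classes))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for epoch in range(300):
    H = np.maximum(X @ W1, 0.0)      # hidden layer (ReLU)
    P = softmax(H @ W2)              # class probabilities
    dZ2 = (P - Y) / n_samples        # cross-entropy gradient
    dW2 = H.T @ dZ2
    dH = dZ2 @ W2.T
    dW1 = X.T @ (dH * (H > 0))
    W1 -= 0.5 * dW1
    W2 -= 0.5 * dW2

print("final training accuracy:", (P.argmax(axis=1) == y).mean())
```

In the research described above, the interesting finding was not the classifier itself but the structure the trained network converged on; the snippet only illustrates the training setup.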



When asked to classify odors, artificial neural networks adopt a structure that closely resembles that of the brain’s olfactory circuitry.

If the properties of materials can be reliably predicted, then the process of developing new products for a huge range of industries can be streamlined and accelerated. In a study published in Advanced Intelligent Systems, researchers from the University of Tokyo Institute of Industrial Science combined core-loss spectroscopy with machine learning to determine the properties of organic molecules.

The spectroscopy techniques energy-loss near-edge structure (ELNES) and X-ray absorption near-edge structure (XANES) are used to obtain information about the electrons, and through them the atoms, in materials. They offer high sensitivity and high resolution and have been used to investigate a range of materials, from electronic devices to drug delivery systems.

However, connecting spectral data to the properties of a material—things like optical properties, electron conductivity, density, and stability—remains ambiguous. Machine learning (ML) approaches have been used to extract information from large, complex data sets. Such approaches use artificial neural networks, which are loosely modeled on how our brains work, to learn to solve problems. Although the group previously used ELNES/XANES spectra and ML to extract information about materials, what they found did not relate to the properties of the material itself, so the information could not be easily translated into practical developments.
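As a rough illustration of the general idea (and not the group’s actual pipeline), the sketch below maps simulated near-edge spectra, treated as fixed-length intensity vectors, to a made-up scalar property with a small regression network. The spectrum length, the target property, and the data are all assumptions.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical example only: learn a mapping from simulated
# ELNES/XANES-style spectra to a synthetic molecular property.
rng = np.random.default_rng(42)
n_molecules, n_energy_bins = 400, 200

spectra = rng.random((n_molecules, n_energy_bins))   # stand-in near-edge intensities
# Synthetic "property" that depends on the low-energy part of the spectrum.
property_values = spectra[:, :50].mean(axis=1) + 0.05 * rng.normal(size=n_molecules)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, property_values, test_size=0.2, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))
```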

While the pandemic is still raging, the chaos of the past 18 months has calmed a bit, and the dust is starting to settle. Now the time has come for healthcare CIOs and other health IT leaders to look forward and plan their IT investments – shaped, in no small part, by the lessons of the recent past.

1:42 Are we on the wrong train to AGI?
4:20 Marvin Minsky and AI generalization problem.
11:57 Defining intelligence in AI
17:17 Is AI masquerading as a trendy statistical analysis tool?
23:35 AI systems lack our most basic intuitions.
27:38 The public not wanting to face Reality.
29:36 Equipping AI with Kant’s categories of the mind (Time, Space, Causality)
33:40 Neural nets VS traditional tools.
34:50 Causality in AI
37:14 Lack of interdisciplinary learning.
45:54 How can we achieve human level of understanding in AI?
49:21 More limitations.
59:35 Motivation in inanimate systems.
1:01:31 Lack of body and transcendent consciousness.
1:05:55 What interdisciplinary learning would you encourage?
1:06:49 Book recommendations.

Gary Marcus is CEO and Founder of Robust AI, a well-known machine learning scientist and entrepreneur, author, and Professor Emeritus at New York University.

Dr. Marcus attended Hampshire College, where he designed his own major, cognitive science, working on human reasoning. He continued on to graduate school at Massachusetts Institute of Technology, where his advisor was the experimental psychologist Steven Pinker. He received his Ph.D. in 1993.

His books include The Algebraic Mind: Integrating Connectionism and Cognitive Science; The Birth of the Mind: How a Tiny Number of Genes Creates the Complexities of Human Thought; Kluge: The Haphazard Construction of the Human Mind, a New York Times Editors’ Choice; and Guitar Zero, which appeared on the New York Times Bestseller list. He edited The Norton Psychology Reader and was co-editor, with Jeremy Freeman, of The Future of the Brain: Essays by the World’s Leading Neuroscientists, which included Nobel Laureates May-Britt Moser and Edvard Moser. Together with Ernie Davis, he authored Rebooting AI and is well known for deconstructing myths of the AI community.

In 2014 he founded Geometric Intelligence, a machine learning company, which was acquired by Uber in 2016. In 2019 he founded Robust AI, where he currently serves as CEO.

Links:
http://rebooting.ai
https://arxiv.org/abs/2002.

Neural Implant Podcast w/ Ladan Jiracek: https://open.spotify.com/show/7qzl8f0yllPaYlBmW9CX3u?si=aXiWglMkR8Wkw8YGLD81nQ

Join this channel to get access to perks:
https://www.youtube.com/channel/UCDukC60SYLlPwdU9CWPGx9Q/join

Neura Pod is a series covering topics related to Neuralink, Inc. Topics such as brain-machine interfaces, brain injuries, and artificial intelligence will be explored. Host Ryan Tanaka synthesizes information and opinions, and conducts interviews to make it easy to learn about Neuralink and its future.

Most people aren’t aware of what the company does, or how it does it. If you know other people who are curious about what Neuralink is doing, this is a nice summary episode to share. Tesla, SpaceX, and the Boring Company are going to have to get used to their newest sibling. Neuralink is going to change how humans think, act, learn, and share information.

Neura Pod:
- Twitter: https://twitter.com/NeuraPod
- Patreon: https://www.patreon.com/neurapod
- Medium: https://neurapod.medium.com/
- Spotify: https://open.spotify.com/show/2hqdVrReOGD6SZQ4uKuz7c
- Instagram: https://www.instagram.com/NeuraPodcast
- Facebook: https://www.facebook.com/NeuraPod
- Tiktok: https://www.tiktok.com/@neurapod

Opinions are my own. Neura Pod receives no compensation from Neuralink and has no formal affiliations with the company. I own Tesla stock and/or derivatives.

Convolutional neural networks running on quantum computers have generated significant buzz for their potential to analyze quantum data better than classical computers can. While a fundamental solvability problem known as “barren plateaus” has limited the application of these neural networks for large data sets, new research overcomes that Achilles heel with a rigorous proof that guarantees scalability.

“The way you construct a quantum neural network can lead to a barren plateau—or not,” said Marco Cerezo, co-author of the paper titled “Absence of Barren Plateaus in Quantum Convolutional Neural Networks,” published today by a Los Alamos National Laboratory team in Physical Review X. Cerezo is a physicist specializing in quantum computing, quantum machine learning, and quantum information at Los Alamos. “We proved the absence of barren plateaus for a special type of quantum neural network. Our work provides trainability guarantees for this architecture, meaning that one can generically train its parameters.”

As an artificial intelligence (AI) methodology, quantum convolutional neural networks are inspired by the visual cortex. As such, they involve a series of convolutional layers, or filters, interleaved with pooling layers that reduce the dimension of the data while keeping important features of a data set.
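The classical analogue of that layout is easy to sketch. The toy network below is an ordinary (non-quantum) 1-D convolutional network whose convolutional layers are interleaved with pooling layers that halve the data dimension at each step; all sizes are arbitrary and purely illustrative, and nothing here involves quantum circuits.

```python
import torch
import torch.nn as nn

# Classical illustration of "convolutional layers interleaved with pooling
# layers that reduce the dimension of the data": each MaxPool1d halves the
# sequence length while the Conv1d filters extract features.
model = nn.Sequential(
    nn.Conv1d(in_channels=1, out_channels=8, kernel_size=3, padding=1),  # filter layer
    nn.ReLU(),
    nn.MaxPool1d(kernel_size=2),   # pooling: length 64 -> 32
    nn.Conv1d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool1d(kernel_size=2),   # pooling: length 32 -> 16
    nn.Flatten(),
    nn.Linear(16 * 16, 2),         # classify the reduced representation
)

x = torch.randn(4, 1, 64)          # batch of 4 toy one-channel signals of length 64
print(model(x).shape)              # torch.Size([4, 2])
```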

A truck fleet accident costs an average of $16,500 in damages and $57,500 in injury-related costs for a total of $74,000. “This does not include a broad range of ‘hidden’ costs, including reduced vehicle value (typically anywhere from $500 to $2,000), higher insurance premium, legal fees, driver turnover (the average driver replacement cost = $8,200), lost employee time, lost vehicle-use time, administrative burden, reduced employee morale and bad publicity,” said Yoav Banin, chief product officer at Nauto, which provides artificial intelligence driver and fleet performance solutions.
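To make the quoted figures concrete, the averages combine as follows. The snippet is illustrative arithmetic only, using the numbers from the quote, and the combined totals include just the two hidden items for which figures are given.

```python
# Averages quoted above (USD).
damages = 16_500
injury_costs = 57_500
direct_total = damages + injury_costs            # 74,000

# Two of the "hidden" costs mentioned in the quote.
reduced_vehicle_value = (500, 2_000)             # typical range
driver_replacement = 8_200                       # average replacement cost

low = direct_total + reduced_vehicle_value[0] + driver_replacement
high = direct_total + reduced_vehicle_value[1] + driver_replacement
print(f"direct costs: ${direct_total:,}")
print(f"with the quantified hidden costs: ${low:,} to ${high:,}")
```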

Emphasis on truck driving safety is well placed, considering other challenges that the trucking industry is facing.

Ranking first is a chronic nationwide shortage of truck drivers, which could force fleet operators to hire less-experienced drivers who require operator and safety training. Driver compensation and truck parking ranked second and third, followed in fourth and fifth place by truck fleet driver safety and insurance availability, which depends on safe driving records.

Artificial intelligence expert Timnit Gebru on the challenges researchers can face at Big Tech companies, and how to protect workers and their research.

Artificial intelligence research leads to new cutting-edge technologies, but it’s expensive. Big Tech companies, which are powered by AI and have deep pockets, often take on this work — but that gives them the power to censor or impede research that casts them in an unfavorable light, according to Timnit Gebru, a computer scientist, co-founder of the nonprofit organization Black in AI and the former co-leader of Google’s Ethical AI team.

The situation imperils both the rights of AI workers at those companies and the quality of research that is shared with the public, said Gebru, speaking at the recent EmTech MIT conference hosted by MIT Technology Review.