
# On the Opportunities and Risks of Foundation Models

AI is undergoing a paradigm shift with the rise of models (e.g., BERT, DALL-E, GPT-3) that are trained on broad data at scale and are adaptable to a wide range of downstream tasks. We call these models foundation models to underscore their critically central yet incomplete character.

This report provides a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications (e.g., law, healthcare, education) and societal impact (e.g., inequity, misuse, economic and environmental impact, legal and ethical considerations).

Though foundation models are based on conventional deep learning and transfer learning, their scale results in new emergent capabilities, and their effectiveness across so many tasks incentivizes homogenization.

Homogenization provides powerful leverage but demands caution, as the defects of the foundation model are inherited by all the adapted models downstream.

Despite the impending widespread deployment of foundation models, we currently lack a clear understanding of how they work, when they fail, and what they are even capable of due to their emergent properties.

To tackle these questions, we believe much of the critical research on foundation models will require deep interdisciplinary collaboration commensurate with their fundamentally sociotechnical nature.

## Original Full Paper (211 pages)

https://arxiv.org/pdf/2108.07258.pdf

Thanks to Folkstone Design Inc.

#AI #ML #FoundationModels #ComputerAndSociety #SocialImplications

Photo by Yuyeung Lau on Unsplash.

https://arxiv.org/abs/2108.07258



Val Kilmer marked the release of his acclaimed documentary “Val” (now streaming on Amazon Prime Video) in a milestone way: He recreated his old speaking voice by feeding hours of recorded audio of himself into an artificial intelligence algorithm. Kilmer lost the ability to speak after undergoing throat cancer treatment in 2014. Kilmer’s team recently joined forces with software company Sonantic and “Val” distributor Amazon to “create an emotional and lifelike model of his old speaking voice” (via The Wrap).

“I’m grateful to the entire team at Sonantic who masterfully restored my voice in a way I’ve never imagined possible,” Val Kilmer said in a statement. “As human beings, the ability to communicate is the core of our existence and the side effects from throat cancer have made it difficult for others to understand me. The chance to narrate my story, in a voice that feels authentic and familiar, is an incredibly special gift.”

Quick – define common sense

Despite being both universal and essential to how humans understand the world around them and learn, common sense has defied a single precise definition. G. K. Chesterton, an English philosopher and theologian, famously wrote at the turn of the 20th century that “common sense is a wild thing, savage, and beyond rules.” Modern definitions today agree that, at minimum, it is a natural, rather than formally taught, human ability that allows people to navigate daily life.

Common sense is unusually broad and includes not only social abilities, like managing expectations and reasoning about other people’s emotions, but also a naive sense of physics, such as knowing that a heavy rock cannot be safely placed on a flimsy plastic table. Naive, because people know such things despite not consciously working through physics equations.

An artificial neural network designed by an international team involving UCL can decode raw data from brain activity, paving the way for new discoveries and a closer integration between technology and the brain.

The new method could accelerate discoveries of how brain activities relate to behaviors.

The study, published today in eLife, was co-led by the Kavli Institute for Systems Neuroscience in Trondheim and the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, and funded by Wellcome and the European Research Council. It shows that a specific type of deep learning network is able to decode many different behaviors and stimuli from a wide variety of brain regions in different species, including humans.

It’s impressive, but I don’t see it doing anything it hasn’t done before. The next step has to be equipping it with human-level hands that can be teleoperated and possibly self-operated.


Parkour is the perfect sandbox for the Atlas team at Boston Dynamics to experiment with new behaviors. In this video our humanoid robots demonstrate whole-body athletics, maintaining their balance through a variety of rapidly changing, high-energy activities. Through jumps, balance beams, and vaults, we demonstrate how we push Atlas to its limits to discover the next generation of mobility, perception, and athletic intelligence.

How does Atlas do parkour? Go behind the scenes in the lab: https://youtu.be/EezdinoG4mk

Parkour Atlas: https://youtu.be/LikxFZZO2sk
More Parkour Atlas: https://youtu.be/_sBBaNYex3E

Qualcomm has unveiled the world’s first drone platform and reference design to tap into both 5G and AI technologies. The chipmaker’s Flight RB5 5G Platform condenses multiple complex technologies into one tightly integrated drone system to support a variety of use cases, including film and entertainment, security and emergency response, delivery, defense, inspection, and mapping.

The Flight RB5 5G Platform is powered by the chipmaker’s QRB5165 processor and builds upon the company’s latest IoT offerings to offer high-performance and heterogeneous computing at ultra-low power consumption.

Bottom line: Boston Dynamics’ Atlas robots may be under new ownership, but they haven’t lost any of their old tricks. The robotics design company has shared a new video featuring its agile uprights tackling an obstacle course. If you haven’t seen what these humanoid bots are capable of lately, it’s certainly worth a look.

They’ve ditched their tethers, aren’t annoyingly loud like they once were, and exhibit very fluid movement. Aside from a couple of minor hiccups, the run was mostly flawless.

It’s even more impressive when you realize that the bots are adapting to their environment on the fly; none of their movements are “pre-programmed.”