
Non-verbal social cues are key.

Robots are becoming increasingly common in everyday life, but their communication skills still lag far behind. One key attribute that could greatly improve human-robot interaction is the ability to read and respond to human emotional cues.

With that ability, they would be able to intervene when they are really needed and stay out of the way the rest of the time. Now, researchers at Franklin & Marshall College have been working on enabling socially assistive robots to process the social cues humans give and respond to them accordingly.

AI is a classic double-edged sword in much the same way as other major technologies have been since the start of the Industrial Revolution. Burning carbon drives the industrial world but leads to global warming. Nuclear fission provides cheap and abundant electricity, though it could also be used to destroy us. The Internet boosts commerce and provides ready access to nearly infinite amounts of useful information, yet also offers an easy path for misinformation that undermines trust and threatens democracy. AI finds patterns in enormous and complex datasets to solve problems that people cannot, though it often reinforces inherent biases and is being used to build weapons in which life-and-death decisions could be automated. The danger of this dichotomy was best described by sociobiologist E.O. Wilson at a Harvard debate: "The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions, and god-like technology."

There is a lot more than the usual amount of handwringing over AI these days. Former Google CEO Eric Schmidt and former US Secretary of State and National Security Advisor Henry Kissinger put out a new book last week warning of AI’s dangers. Fresh AI warnings have also been issued by professors Stuart Russell (UC Berkeley) and Yuval Noah Harari (Hebrew University of Jerusalem). Op-eds from the editorial board at the Guardian and Maureen Dowd at the New York Times have amplified these concerns. Facebook — now rebranded as Meta — has come under growing pressure over its algorithms creating social toxicity, but it is hardly alone. The White House has called for an AI Bill of Rights, and the Financial Times argues this should extend globally. Worries over AI are flying faster than a gale-force wind.

When comparing Meta — formerly Facebook — and Microsoft’s approaches to the metaverse, it’s clear Microsoft has a much more grounded and realistic vision. Although Meta currently leads in virtual reality (VR) devices through its ownership of what was previously called Oculus, Microsoft is adapting technologies that are already more widely used. The small, steady steps Microsoft is taking today put it in a better position to be one of the metaverse’s future leaders. However, such a position comes with responsibilities, and Microsoft needs to be prepared to face them.

The metaverse is a virtual world where users can share experiences and interact in real-time within simulated scenarios. To be clear, no one knows yet what it will end up looking like, what hardware it will use, or which companies will be the main players — these are still early days. However, what is certain is that VR will play a key enabling role; VR-related technologies such as simultaneous location and mapping (SLAM), facial recognition, and motion tracking will be vital for developing metaverse-based use cases.

Google’s own DeepMind has just released a revolutionary new artificial intelligence model that will likely power many AI applications in the future. They named it Perceiver, and it is designed as a more general alternative to the Transformer, today’s most popular AI architecture.

Perceiver is meant to bring a more general approach to problems, a step some would associate with artificial general intelligence. Whether or not DeepMind will manage to create the best AI of the future remains to be seen. But one thing is for sure: the future of AI is quite weird, but also amazing to watch.
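
The core idea behind Perceiver is a small, fixed-size latent array that cross-attends to a much larger input array, so compute no longer scales with the square of the input size as in a standard Transformer. Below is a minimal NumPy sketch of a single cross-attention step; all dimensions and the random, untrained weights are toy assumptions (the real model learns these weights and stacks many such layers).

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(latents, inputs, d_k=32):
    # Hypothetical toy projections; a trained Perceiver learns these weights.
    d_l, d_in = latents.shape[1], inputs.shape[1]
    Wq = rng.normal(size=(d_l, d_k))   # queries come from the small latent array
    Wk = rng.normal(size=(d_in, d_k))  # keys and values come from the large input
    Wv = rng.normal(size=(d_in, d_l))
    q, k, v = latents @ Wq, inputs @ Wk, inputs @ Wv
    attn = softmax(q @ k.T / np.sqrt(d_k))  # (n_latents, n_inputs)
    return latents + attn @ v               # residual update of the latent array

# 10,000 input elements (e.g. pixels) distilled into just 64 latents:
inputs = rng.normal(size=(10_000, 16))
latents = rng.normal(size=(64, 16))
out = cross_attention(latents, inputs)
print(out.shape)  # (64, 16)
```

Because only the latents attend to the inputs, the attention matrix is 64 × 10,000 rather than 10,000 × 10,000, which is what lets the same architecture ingest images, audio, or point clouds without modality-specific tricks.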

If you enjoyed this video, please consider rating it and subscribing to our channel for more frequent uploads. Thank you!

TIMESTAMPS:
00:00 A new kind of AI
01:46 What this new model does
03:40 How Perceiver works
05:40 Problems with Perceiver
06:28 Last words

#deepmind #ai #agi

Leading AI scientists around the world are becoming increasingly worried that artificial intelligence programs are growing more unpredictable and incomprehensible as they become more powerful. AI is also taking over influential roles in government, healthcare, and defense, which could prove dangerous as artificial superintelligence draws closer, with some predicting the Singularity around 2045. People like Elon Musk and Ray Kurzweil have long warned us about AI beating humans at anything we can imagine. Nvidia and Meta are also working on specially made hardware and software, in the form of PyTorch and their 2022 GPUs. Artificial general intelligence poses real dangers, and here are some possible solutions.

TIMESTAMPS:
00:00 The dawn of incomprehensible AI
01:33 The dangers of artificial intelligence
03:03 A possible solution
04:23 What ASI means for society
07:15 So, is all hope lost?
09:03 Last words

#ai #asi #agi

Summary: A newly developed AI algorithm can directly predict eye position and movement during an MRI scan. The technology could provide new diagnostics for neurological disorders that manifest in changes in eye-movement patterns.

Source: Max Planck Institute.

A large amount of information constantly flows into our brain via the eyes. Scientists can measure the resulting brain activity using magnetic resonance imaging (MRI). The precise measurement of eye movements during an MRI scan can tell scientists a great deal about our thoughts, memories and current goals, but also about diseases of the brain.

The person staring back from the computer screen may not actually exist, thanks to artificial intelligence (AI) capable of generating convincing but ultimately fake images of human faces. Now this same technology may power the next wave of innovations in materials design, according to Penn State scientists.

“We hear a lot about deepfakes in the news today – AI that can generate realistic images of human faces that don’t correspond to real people,” said Wesley Reinhart, assistant professor of materials science and engineering and Institute for Computational and Data Sciences faculty co-hire at Penn State. “That’s exactly the same technology we used in our research. We’re basically just swapping out this example of images of human faces for elemental compositions of high-performance alloys.”

The scientists trained a generative adversarial network (GAN) to create novel refractory high-entropy alloys, materials that can withstand ultra-high temperatures while maintaining their strength and that are used in technology from turbine blades to rockets.
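
As a rough illustration of the GAN setup described above (not the Penn State team's actual model), here is a toy sketch in which a generator maps random noise to element fractions of a hypothetical five-element refractory alloy, and a discriminator scores whether a composition looks real. The element list, layer sizes, and random weights are all assumptions; a real GAN would train both networks against a database of known alloys until the generator's outputs fool the discriminator.

```python
import numpy as np

rng = np.random.default_rng(1)
N_ELEMENTS = 5   # e.g. Mo, Nb, Ta, W, V (an assumed refractory-alloy palette)
LATENT = 8       # size of the generator's noise input

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def generator(z, W):
    # Map noise to element fractions that are non-negative and sum to 1.
    return softmax(W @ z)

def discriminator(x, w):
    # Sigmoid score: probability the composition is "real".
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# Untrained (random) parameters, for structure only.
Wg = rng.normal(size=(N_ELEMENTS, LATENT))
wd = rng.normal(size=N_ELEMENTS)

z = rng.normal(size=LATENT)
fake_alloy = generator(z, Wg)
score = discriminator(fake_alloy, wd)
print(fake_alloy.sum())  # fractions sum to 1 by construction
```

The softmax output layer is the one alloy-specific design choice here: it guarantees every generated sample is a valid composition, so training pressure goes entirely into making those compositions plausible.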

Landing AI, a California-based startup led by Google Brain co-founder Andrew Ng, has just nabbed $57 million in Series A funding for its computer vision platform.

Landing AI’s flagship product, LandingLens, doesn’t have the kind of highlights you see at Google I/O or an Apple event, where tech giants show off how the latest advances in AI are making personal devices smarter and more useful. But its impact could be no less significant than that of the AI technology finding its way into consumer products and services.

Landing AI is one of several companies bringing computer vision to the industrial sector. As industrial computer vision platforms mature, they can bring greater productivity, cost-efficiency, and safety to many different domains.