
✅ Instagram: https://www.instagram.com/pro_robots.

You are on the PRO Robots channel, and in this video we will talk about artificial intelligence: copying the structure of the brain, mutual understanding and mutual assistance, self-learning, rethinking biological life forms, replacing people in various jobs, and even cheating. What have neural networks learned lately? All the new skills and superpowers of artificial-intelligence-based systems in one video!

0:00 In this video.
0:26 Isomorphic Labs.
1:14 Artificial intelligence trains robots.
2:01 MIT researchers’ algorithm teaches robots social skills.
2:45 AI adopts brain structure.
3:28 Revealing cause and effect relationships.
4:40 Miami Herald replaces fired journalist with bot.
5:26 Nvidia unveiled a neural network that creates animated 3D face models based on voice.
5:55 Sber presented code generation model based on ruGPT-3 neural network.
6:50 ruDALL-E multimodal neural network.
7:16 Cristofari Neo supercomputer for neural network training.

#prorobots #robots #robot #futuretechnologies #robotics

More interesting and useful content:

✅ Elon Musk Innovation https://www.youtube.com/playlist?list=PLcyYMmVvkTuQ-8LO6CwGWbSCpWI2jJqCQ
✅ Future Technologies Reviews https://www.youtube.com/playlist?list=PLcyYMmVvkTuTgL98RdT8-z-9a2CGeoBQF
✅ Technology news.

Reading the time on an analogue clock is surprisingly difficult for computers, but artificial intelligence can now do so accurately using a method that had previously proved tricky to deploy.

Computer vision has long been able to read the time from digital clocks by simply looking at the numbers on the screen. But analogue clocks are much more challenging because of factors including variation in their design and the way shadows and reflections can obscure the hands.
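As a hedged illustration only: once a vision model has located the clock face and estimated each hand's angle (the hard part alluded to above), turning those angles into a time reading is simple geometry. The function below is a hypothetical sketch of that final step, not the method used in the research described here.

```python
# Hypothetical sketch: converting detected hand angles into a time reading.
# In a real pipeline a vision model (e.g. a keypoint or regression network)
# would estimate the angles; here we show only the geometry step.

def angles_to_time(hour_angle_deg: float, minute_angle_deg: float) -> str:
    """Convert hand angles (degrees clockwise from 12 o'clock) to H:MM."""
    minutes = round(minute_angle_deg / 6.0) % 60   # 360 degrees / 60 minutes
    hours = int(hour_angle_deg // 30) % 12         # 360 degrees / 12 hours
    if hours == 0:
        hours = 12
    return f"{hours}:{minutes:02d}"

# Example: hour hand at 95 degrees, minute hand at 60 degrees -> 3:10
print(angles_to_time(95.0, 60.0))
```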

It could replace cartilage in knees and even help create soft robots 🤯


Is it a bird? Is it a plane? No, it’s ‘super jelly’ — a bizarre new material that can survive being run over by a car even though it’s composed of 80 per cent water.

The ‘glass-like hydrogel’ may look and feel like a squishy jelly, but when compressed it acts like shatterproof glass, its University of Cambridge developers said.

It is formed using a network of polymers held together by a series of reversible chemical interactions that can be tailored to control the gel’s mechanical properties.

I continue to introduce you to a series of articles on the nature of human intelligence and the future of artificial intelligence systems. In the previous article, “Artificial intelligence vs neurophysiology: Why the difference matters”, we found that the basis of any biological nervous system's operation is not a computational function (as in a computer), but a reflex, that is, a pre-prepared response.

But how, then, did our intelligence come about? How did a biological system that merely repeats pre-prepared reactions become a powerful creative machine?

In this article we will answer this question with facts. In creating our intelligence, nature found a simple and at the same time ingenious solution, one not without a great mystery, which we will also touch on.

Text-to-photorealism.


Nvidia's deep learning technologies continue to do wonderful and weird things. Just a few weeks ago we saw how the company can use AI to automatically match voice lines to 3D animated faces. This is the kind of cool tech that can help people create great things with ease, or, in the case of Nvidia's latest unveiling, potentially horrible things, but still with ease.

We first saw Nvidia’s GauGAN a few years back. It was demonstrated turning basic doodles into photorealistic images with a click of a few buttons. It’s pretty neat stuff, and definitely worth playing with. Now GauGAN2 is out, and it doesn’t even need your sketches to make highly detailed landscape images.

See customers interact with a second example of Project Tokkio, a Maxine-powered AI talking kiosk. This reference application leverages NVIDIA Metropolis vision AI and Riva speech AI technology to communicate with the user. It uses NVIDIA’s Megatron-Turing NLG 530B, a state-of-the-art language model, to understand intent, and NVIDIA Merlin to make meaningful recommendations. The 3D avatar is animated and visualized with NVIDIA Omniverse to deliver a visually stunning experience, all in real time.
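The paragraph above describes an architecture that chains vision, speech recognition, a language model, a recommender and an avatar renderer. The sketch below is a generic, hypothetical stand-in for one turn of such an interaction loop; the class and method names are placeholders, not NVIDIA APIs.

```python
# Hypothetical sketch of a talking-kiosk interaction turn, loosely in the
# spirit of the system described above. The component objects are
# placeholders: in the real system vision would be NVIDIA Metropolis, speech
# recognition Riva, the language model Megatron-Turing NLG, recommendations
# Merlin, and rendering Omniverse.

from dataclasses import dataclass


@dataclass
class KioskTurn:
    user_present: bool
    transcript: str
    reply: str


def run_kiosk_turn(camera_frame, audio_chunk, vision, asr, llm, recommender, avatar):
    """One interaction turn: perceive, understand, recommend, respond."""
    user_present = vision.detect_person(camera_frame)   # vision AI
    if not user_present:
        return None
    transcript = asr.transcribe(audio_chunk)             # speech-to-text
    intent = llm.classify_intent(transcript)             # language model: intent
    items = recommender.suggest(intent)                  # recommender system
    reply = llm.generate_reply(transcript, items)        # language model: response
    avatar.speak(reply)                                   # animated 3D avatar output
    return KioskTurn(user_present, transcript, reply)
```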

It’s almost time to use our AI brothers to search for and welcome our space brothers. Welcome, AI and space friends.


The best public policy is shaped by scientific evidence. Although this seems obvious in retrospect, scientists often fail to follow this dictum. The refusal to admit anomalies as evidence that our knowledge base may have missed something important about reality stems from our ego. However, what will happen when artificial intelligence plays a starring role in the analysis of data? Will these future ‘AI scientists’ alter the way information is processed and understood, all without human bias?

The mainstream of physics routinely embarks on speculations. For example, we invested 7.5 billion Euros in the Large Hadron Collider with the hope of finding Supersymmetry, without success. We invested hundreds of millions of dollars in the search for Weakly Interacting Massive Particles (WIMPs) as dark matter, and four decades later we have been unsuccessful. In retrospect, these were searches in the dark. But one wonders why they were endorsed by the mainstream scientific community while less speculative searches are not.

Consider, for example, the search for equipment in space from extraterrestrial civilizations. Our own civilization has launched five interstellar probes. Moreover, the Kepler satellite data revealed that a substantial fraction of all Sun-like stars host an Earth-sized planet at roughly the same separation as the Earth from the Sun. Given that most stars formed billions of years before the Sun, imagining numerous extraterrestrial probes floating in interstellar space should not be regarded as more speculative than the notions of Supersymmetry or WIMPs.
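As a rough, hedged back-of-envelope illustration of that abundance argument (both input numbers below are assumptions made for the sake of the example, not figures quoted in the text above):

```python
# Back-of-envelope estimate of Earth-sized planets at Earth-like separations
# around Sun-like stars in the Milky Way. Both inputs are rough assumptions
# for illustration only, not values taken from the article.

sun_like_stars_in_galaxy = 1e10      # assumed order of magnitude
fraction_with_earth_analog = 0.2     # assumed "substantial fraction"

earth_analogs = sun_like_stars_in_galaxy * fraction_with_earth_analog
print(f"~{earth_analogs:.0e} potential Earth analogs in the galaxy")
```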

Architecture and construction have always been, rather quietly, at the bleeding edge of tech and materials trends. It’s no surprise, then, especially at a renowned technical university like ETH Zurich, to find a project utilizing AI and robotics in a new approach to these arts. The automated design and construction they are experimenting with show how homes and offices might be built a decade from now.

The project is a sort of huge sculptural planter, “hanging gardens” inspired by the legendary structures in the ancient city of Babylon. (Incidentally, it was my ancestor, Robert Koldewey, who excavated/looted the famous Ishtar Gate to the place.)

Begun in 2019, Semiramis (named after the legendary queen of Babylon) is a collaboration between human and AI designers. The general idea of course came from the creative minds of its creators, architecture professors Fabio Gramazio and Matthias Kohler. But the design was achieved by putting the basic requirements, such as size, the need for watering and the style of construction, through a set of computer models and machine learning algorithms, as sketched below.
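The workflow described here resembles constraint-driven generative design: candidate structures are generated, scored against the stated requirements, and the best-performing ones are kept. The sketch below is a deliberately generic, hypothetical illustration of such a loop; it is not the ETH Zurich team's actual pipeline, and all parameter names and values are made up.

```python
# Hypothetical sketch of constraint-driven generative design, loosely in the
# spirit of the process described above (not the ETH Zurich pipeline itself).
# Random candidate designs are scored against requirements; the best is kept.

import random

REQUIREMENTS = {"min_planter_area_m2": 20.0, "max_height_m": 10.0}  # assumed


def random_candidate():
    """Sample a candidate design: number of planter pods and their sizes."""
    return {
        "pods": random.randint(3, 8),
        "pod_area": random.uniform(2.0, 6.0),   # m^2 per pod
        "height": random.uniform(5.0, 12.0),    # overall height in m
    }


def score(design):
    """Higher is better; designs violating a requirement score -inf."""
    total_area = design["pods"] * design["pod_area"]
    if total_area < REQUIREMENTS["min_planter_area_m2"]:
        return float("-inf")
    if design["height"] > REQUIREMENTS["max_height_m"]:
        return float("-inf")
    # Prefer designs that meet the area target with fewer, larger pods.
    return total_area - design["pods"]


best = max((random_candidate() for _ in range(1000)), key=score)
print(best)
```

In the real project this role is played by physics-aware simulation and machine learning models rather than random sampling, but the overall shape of the search, generate, evaluate against constraints, keep the best, is the same.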