
More than a score of companies are pushing to be early winners in the race for self-driving taxis — robotaxis — with the potential that brings to capture the entire value chain of car transport from riders. They are all at different stages, and almost all of them want to convince the public and investors that they are far along.

To really know how far along a project is, you need the chance to look inside it: to see the data only insiders see on just how well the vehicle is performing, as well as what it can and can't do. Most teams want to keep those inside details secret, though in time they will need to reveal them to convince the public, and eventually regulators, that they are ready to deploy.

Because those details are kept secret, those of us looking in from the outside can only scrape for clues. The biggest clues come when teams reach certain milestones, and when they take risks that tell us their own internal math says the risk is acceptable. Most teams announce successes and release videos of drives, but these offer only limited information because they can be cherry-picked. The best indicators are what they do, not what they say.

A new “common-sense” approach to computer vision enables artificial intelligence that interprets scenes more accurately than other systems do.

Computer vision systems sometimes make inferences about a scene that fly in the face of common sense. For example, if a robot were processing a scene of a dinner table, it might completely ignore a bowl that is visible to any human observer, estimate that a plate is floating above the table, or misperceive a fork to be penetrating a bowl rather than leaning against it.

Move that computer vision system to a self-driving car and the stakes become much higher — for example, such systems have failed to detect emergency vehicles and pedestrians crossing the street.
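
The failure modes above (a floating plate, a fork passing through a bowl) are exactly the kind that simple physical-consistency rules can catch. Below is a minimal Python sketch of such a plausibility filter over a vision system's 3D hypotheses; the objects, the height-only geometry, and the thresholds are illustrative assumptions, not the method used by the research described above.

```python
# A hypothetical plausibility check on a vision system's 3D scene hypothesis.
# For simplicity, each object is reduced to a vertical extent (meters);
# a real system would reason over full 3D geometry and contact.
from dataclasses import dataclass

@dataclass
class Box:
    """Detected object, reduced to its vertical extent in meters."""
    name: str
    z_min: float  # bottom height
    z_max: float  # top height

def floating(obj: Box, support: Box, gap_tol: float = 0.02) -> bool:
    """Flag objects hovering above their supposed supporting surface."""
    return obj.z_min - support.z_max > gap_tol

def penetrating(a: Box, b: Box, overlap_tol: float = 0.01) -> bool:
    """Flag interpenetrating solids (e.g., a fork 'inside' the table)."""
    overlap = min(a.z_max, b.z_max) - max(a.z_min, b.z_min)
    return overlap > overlap_tol

table = Box("table", 0.70, 0.75)
plate = Box("plate", 0.80, 0.82)  # hovers 5 cm above the table
fork = Box("fork", 0.72, 0.78)    # overlaps the table solid

if floating(plate, table):
    print(f"implausible: {plate.name} floats above {table.name}")
if penetrating(fork, table):
    print(f"implausible: {fork.name} penetrates {table.name}")
```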


You’re on the PRO Robots channel, and in this issue, on the eve of the New Year and Christmas, we’ve made a selection of non-trivial gifts for you, from high-tech to simple but useful. See the top robots and gadgets you can buy right now for fun, for usefulness, or to feel like you’re in a futuristic movie. Have you started picking out presents for the New Year yet?

0:00 In this issue.
0:23 Robot vacuum cleaner ROIDMI EVE Plus.
1:13 CIRO Solar Robot Kit.
1:46 mBot robot construction kits by Makeblock.
2:30 Adeept PiCar Pro Robotics Kit.
2:50 Adeept RaspClaws Hexapod Spider Robot.
3:10 Copies of the Spot and Unitree robots.
3:20 Ultrasonic device for phone disinfection.
3:35 Projector for your phone.
3:54 Wireless Record Player.
4:15 Gadgets to Find Lost Things.
4:31 Compact Smart Security Camera.
4:50 Smart Change Jar.
5:11 Face Tracking Phone Holder.
5:27 Smart Garden.
5:53 Smart ring.
6:28 Smart Mug.


Tesla is allowing drivers — yes, the person behind the wheel who is ideally preoccupied with tasks such as “steering” — to play video games on its vehicles’ massive console touchscreens while driving.

“I only did it for like five seconds and then turned it off,” Tesla owner Vince Patton told The New York Times. “I’m astonished. To me, it just seems inherently dangerous.”

The feature has reportedly been available for some time. Given that the company is already facing fierce scrutiny for rolling out its still unfinished Full Self-Driving beta to customers, it’s not exactly a good look.

AI artist Botto has just made over a million dollars selling NFTs of its work.

Botto, which works in collaboration with a human community, hit the million-dollar mark within five weeks of putting a batch of NFT-backed artwork up for auction, Euronews reported.

The haul elevates the AI artist into a small-but-growing cohort of synthetic creators, and the designer behind Botto, German artist Mario Klingemann, told Euronews that he believes Botto’s practice could eventually extend to books and music.

Alethea AI and BeingAI are collaborating with the Binance NFT marketplace to introduce AI game characters based on nonfungible tokens (NFTs).

Alethea AI creates smart avatars that use AI to hold conversations with people, and it has launched its own NFT collectible AI characters. NFTs use the transparency and security of the blockchain's digital ledger to authenticate unique digital items. The companies see this as the underlying AI infrastructure for iNFTs, or intelligent nonfungible tokens, on the path to the metaverse, the universe of interconnected virtual worlds imagined in novels such as Snow Crash and Ready Player One.

BeingAI, meanwhile, is on a quest to create AI characters that can interact and talk with users in real time. The two companies are working with Binance's NFT marketplace to launch an intelligent IGO (Initial Game Offering) featuring one hundred intelligent NFT characters.
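
To make the "authenticate unique digital items" idea concrete, here is a minimal sketch of an NFT-style ownership record with an attached AI-personality hash. The in-memory ledger, field names, and mint/transfer helpers are illustrative assumptions; real iNFTs are implemented as smart contracts on a blockchain, not a Python dict.

```python
# A toy stand-in for on-chain NFT state: each token is unique, its owner is
# public, and a content hash lets anyone verify the attached AI payload.
import hashlib

ledger: dict[int, dict] = {}  # token_id -> record (stand-in for chain state)

def mint(token_id: int, owner: str, art_uri: str, persona_blob: bytes) -> None:
    """Record a unique token; hashing the AI payload makes it verifiable."""
    if token_id in ledger:
        raise ValueError("token already minted; uniqueness is the point")
    ledger[token_id] = {
        "owner": owner,
        "art_uri": art_uri,
        "persona_sha256": hashlib.sha256(persona_blob).hexdigest(),
    }

def transfer(token_id: int, new_owner: str) -> None:
    """Ownership changes are just ledger updates, visible to everyone."""
    ledger[token_id]["owner"] = new_owner

mint(1, "alice", "ipfs://example-art", b"<prompt profile / model config>")
transfer(1, "bob")
print(ledger[1]["owner"], ledger[1]["persona_sha256"][:12])
```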

Humans are pretty good at looking at a single two-dimensional image and understanding the full three-dimensional scene that it captures. Artificial intelligence agents are not.

Yet a machine that needs to interact with objects in the world—like a robot designed to harvest crops or assist with surgery—must be able to infer properties of a 3D scene from the 2D images it’s trained on.

While scientists have had success using neural networks to infer representations of 3D scenes from images, these machine learning methods aren’t fast enough to make them feasible for many real-world applications.
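
As a sketch of the interface this describes, here is a minimal PyTorch model in which an encoder compresses a single 2D image into a latent scene code, and a decoder answers 3D occupancy queries ("is this point filled?") against that code. All layer sizes are assumptions for illustration; this is not the MIT model, and a real system would be trained on posed images.

```python
# A minimal sketch of inferring a queryable 3D representation from one image.
import torch
import torch.nn as nn

class Image2Scene(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        self.encoder = nn.Sequential(          # 2D image -> latent scene code
            nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dim),
        )
        self.decoder = nn.Sequential(          # (code, xyz) -> occupancy
            nn.Linear(latent_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, image: torch.Tensor, xyz: torch.Tensor) -> torch.Tensor:
        code = self.encoder(image)                       # (B, latent_dim)
        code = code.unsqueeze(1).expand(-1, xyz.shape[1], -1)
        return self.decoder(torch.cat([code, xyz], -1))  # (B, N, 1)

model = Image2Scene()
occ = model(torch.randn(1, 3, 64, 64), torch.randn(1, 1000, 3))
print(occ.shape)  # torch.Size([1, 1000, 1])
```

One appeal of this amortized design is speed: answering a 3D query is a single forward pass rather than a per-scene optimization, which is one common source of the slowness the passage above alludes to.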

Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in large perfect and imperfect information games — an important step towards truly general algorithms for arbitrary environments. We prove that Player of Games is sound, converging to perfect play as available computation time and approximation capacity increase.
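
As a concrete, if drastically simplified, illustration of the recipe the abstract describes (guided search plus self-play learning), here is a tabular sketch on the toy game of Nim. The game, the one-ply search, and the Monte Carlo value updates are illustrative assumptions; Player of Games itself uses neural networks and game-theoretic reasoning to handle imperfect information.

```python
# Self-play with value-guided search on Nim (take 1-3 stones; last stone wins).
import random
from collections import defaultdict

value = defaultdict(float)  # state -> estimated value for the player to move

def moves(stones: int):
    return [m for m in (1, 2, 3) if m <= stones]

def search(stones: int) -> int:
    """One-ply guided search: pick the move that looks worst for the opponent."""
    return min(moves(stones), key=lambda m: value[stones - m])

def self_play_episode(lr: float = 0.1) -> None:
    stones, trajectory = 21, []
    while stones > 0:
        greedy = random.random() < 0.8
        m = search(stones) if greedy else random.choice(moves(stones))
        trajectory.append(stones)
        stones -= m
    # The player who took the last stone wins; propagate the result backwards,
    # flipping sign at each ply since players alternate.
    outcome = 1.0
    for s in reversed(trajectory):
        value[s] += lr * (outcome - value[s])
        outcome = -outcome

for _ in range(5000):
    self_play_episode()
print(search(21))  # a trained opening move for 21-stone Nim
```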

The idiom “actions speak louder than words” first appeared in print almost 300 years ago. A new study echoes this view, arguing that combining self-supervised and offline reinforcement learning (RL) could lead to a new class of algorithms that understand the world through actions and enable scalable representation learning.

Machine learning (ML) systems have achieved outstanding performance in domains ranging from computer vision to speech recognition and natural language processing, yet still struggle to match the flexibility and generality of human reasoning. This has led ML researchers to search for the “missing ingredient” that might boost these systems’ ability to understand, reason and generalize.

In the paper Understanding the World Through Action, Sergey Levine, an assistant professor in UC Berkeley's Department of Electrical Engineering and Computer Sciences, suggests that a general, principled, and powerful framework for utilizing unlabelled data could be derived from RL, enabling ML systems that leverage large datasets to better understand the real world.
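
A minimal sketch of the offline RL loop this proposal builds on: improving a policy purely from a fixed dataset of logged transitions, with no further environment interaction. The toy chain environment, tabular Q-values, and fitted Q-iteration are illustrative assumptions, not Levine's algorithm.

```python
# Offline RL in miniature: learn a policy from a static transition dataset.
import random
from collections import defaultdict

ACTIONS = (-1, +1)            # step left / right on a 5-state chain
GAMMA = 0.9

def reward(s: int) -> float:
    return 1.0 if s == 4 else 0.0  # goal at the right end of the chain

# A static dataset logged by a random behavior policy (no online access).
dataset = []
for _ in range(2000):
    s = random.randint(0, 4)
    a = random.choice(ACTIONS)
    s2 = min(4, max(0, s + a))
    dataset.append((s, a, reward(s2), s2))

# Fitted Q-iteration: repeatedly regress Q toward bootstrapped targets,
# sweeping only over the fixed dataset.
Q = defaultdict(float)
for _ in range(50):
    for s, a, r, s2 in dataset:
        target = r + GAMMA * max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += 0.1 * (target - Q[(s, a)])

policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(5)}
print(policy)  # learned without ever touching the environment online
```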