
A group of five companies, including the Japanese unit of IBM Corp, is developing an artificial intelligence suitcase to help visually impaired people travel independently, and a prototype was pilot-tested at an airport in Japan earlier this month.

The small navigation robot, which is able to plan an optimal route to a destination based on the user’s location and map data, uses multiple sensors to assess its surroundings and AI functionality to avoid bumping into obstacles, according to the companies.
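To make the route-planning idea concrete, here is a minimal Python sketch of grid-based path planning around obstacles. The occupancy grid, the plan_route function, and the toy airport map are illustrative assumptions, not the companies' actual navigation stack.

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over a 2D occupancy grid.
    grid[r][c] == 1 marks an obstacle detected by the sensors."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    visited = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in visited:
                visited.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no obstacle-free route exists

# Toy map: 0 = free floor, 1 = obstacle (e.g. a pillar or another traveller)
airport_map = [
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]
print(plan_route(airport_map, start=(0, 0), goal=(3, 3)))
```

A real system would replan continuously as the sensors update the map, but the core idea of searching an obstacle-aware map for a route is the same.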

In the pilot test held on Nov 2, the AI suitcase successfully navigated to an All Nippon Airways departure counter after receiving a command from Chieko Asakawa, a visually impaired IBM Fellow overseeing the product's development.


Introduction

Computer vision has moved beyond the emerging stage, and the results are incredibly useful across a range of applications. It is in our mobile phone cameras, which can recognize faces. It is in self-driving cars, which recognize traffic signals, signs, and pedestrians. And it is in industrial robots, which monitor for problems and navigate around co-workers.
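As a concrete example of the phone-camera case, the sketch below uses OpenCV's bundled Haar-cascade detector to find faces in a photo. The image path is a placeholder, and modern systems typically use stronger detectors; this is only meant to show how little code a basic computer-vision task requires.

```python
import cv2

# Load OpenCV's bundled Haar-cascade face detector
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("photo.jpg")                      # placeholder path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# detectMultiScale returns one (x, y, w, h) box per detected face
faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Detected {len(faces)} face(s)")
```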

You’ve probably heard us say this countless times: GPT-3, the gargantuan AI that spews uncannily human-like language, is a marvel. It’s also largely a mirage. You can tell with a simple trick: Ask it the color of sheep, and it will suggest “black” as often as “white”—reflecting the phrase “black sheep” in our vernacular.
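GPT-3 itself is not openly downloadable, but the same kind of probe can be run against a smaller masked language model. The prompt below is an illustrative assumption; it simply asks the model to fill in a sheep's colour and prints its top guesses.

```python
from transformers import pipeline

# Ask a masked language model to fill in the sheep's colour
fill = pipeline("fill-mask", model="bert-base-uncased")
for candidate in fill("the sheep in the field was [MASK]."):
    print(candidate["token_str"], round(candidate["score"], 3))
```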

That’s the problem with language models: because they’re only trained on text, they lack common sense. Now researchers from the University of North Carolina, Chapel Hill, have designed a new technique to change that. They call it “vokenization,” and it gives language models like GPT-3 the ability to “see.”

It’s not the first time people have sought to combine language models with computer vision. This is actually a rapidly growing area of AI research. The idea is that both types of AI have different strengths. Language models like GPT-3 are trained through unsupervised learning, which requires no manual data labeling, making them easy to scale. Image models like object recognition systems, by contrast, learn more directly from reality. In other words, their understanding doesn’t rely on the kind of abstraction of the world that text provides. They can “see” from pictures of sheep that they are in fact white.
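The sketch below illustrates the general token-to-image retrieval idea with cosine similarity over random embeddings. It is a conceptual toy, not the UNC team's actual vokenization pipeline; the embedding sizes and the retrieval rule are assumptions made purely for illustration.

```python
import numpy as np

def retrieve_vokens(token_embeddings, image_embeddings):
    """For each token, pick the image whose embedding is most similar (cosine)."""
    t = token_embeddings / np.linalg.norm(token_embeddings, axis=1, keepdims=True)
    v = image_embeddings / np.linalg.norm(image_embeddings, axis=1, keepdims=True)
    similarity = t @ v.T                 # shape: (num_tokens, num_images)
    return similarity.argmax(axis=1)     # best-matching image index per token

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 64))        # e.g. embeddings for 5 tokens in a sentence
images = rng.normal(size=(100, 64))      # a bank of 100 image embeddings
print(retrieve_vokens(tokens, images))
```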

Recently, a team of researchers from Facebook AI and Tel Aviv University proposed an AI system that solves the multiple-choice intelligence test, Raven’s Progressive Matrices. The proposed AI system is a neural network model that combines multiple advances in generative models, including employing multiple pathways through the same network.

Raven’s Progressive Matrices, also known as Raven’s Matrices, are multiple-choice intelligence tests. The test is used to measure abstract reasoning and is regarded as a non-verbal estimate of fluid intelligence.

In this test, a person tries to complete the missing cell in a 3×3 grid of abstract images. According to the researchers, much of the prior work on this problem focuses entirely on choosing the right answer from the given options. In this research, by contrast, the researchers focused on generating a correct answer from the grid alone, without seeing the choices.
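A toy analogue of that generative setup: assume a simple row rule (the third cell is the element-wise XOR of the first two) and synthesise the missing cell directly, without ever consulting answer choices. The rule and the tiny binary "images" are illustrative assumptions, not the Facebook AI and Tel Aviv University model.

```python
import numpy as np

def complete_grid(grid):
    """Generate the missing bottom-right cell directly, without answer choices,
    under the (assumed) rule that each row's third cell is the element-wise XOR
    of the first two. Real Raven's items use a variety of rules."""
    return np.logical_xor(grid[2][0], grid[2][1]).astype(int)

rng = np.random.default_rng(1)
a = rng.integers(0, 2, (3, 3))           # each cell is a tiny 3x3 binary image
b = rng.integers(0, 2, (3, 3))
xor = lambda x, y: np.logical_xor(x, y).astype(int)
grid = [
    [a, b, xor(a, b)],
    [b, a, xor(b, a)],
    [a, b, None],                        # the missing cell to be generated
]
print(complete_grid(grid))               # equals xor(a, b)
```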

If Facebook’s AI research objectives are successful, it may not be long before home assistants take on a whole new range of capabilities. Last week the company announced new work focused on advancing what it calls “embodied AI”: basically, a smart robot that will be able to move around your house to help you remember things, find things, and maybe even do things.

Robots That Hear, Home Assistants That See

In Facebook’s blog post about audio-visual navigation for embodied AI, the authors point out that most of today’s robots are “deaf”; they move through spaces based purely on visual perception. The company’s new research aims to train AI using both visual and audio data, letting smart robots detect and follow objects that make noise as well as use sounds to understand a physical space.
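As a rough sketch of what audio-visual fusion can look like, the toy PyTorch policy below concatenates a visual feature vector and an audio feature vector before scoring navigation actions. The feature sizes and the action set are assumptions for illustration, not Facebook's published architecture.

```python
import torch
import torch.nn as nn

class AudioVisualPolicy(nn.Module):
    """Toy policy that fuses visual and audio features before picking an action."""
    def __init__(self, visual_dim=512, audio_dim=128, num_actions=4):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(visual_dim + audio_dim, 256),
            nn.ReLU(),
            nn.Linear(256, num_actions),   # e.g. forward, left, right, stop
        )

    def forward(self, visual_feat, audio_feat):
        return self.fuse(torch.cat([visual_feat, audio_feat], dim=-1))

policy = AudioVisualPolicy()
action_logits = policy(torch.randn(1, 512), torch.randn(1, 128))
print(action_logits.argmax(dim=-1))        # index of the chosen action
```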

Intel, continuing its acquisition spree, has acquired Israel-based Cnvrg.io. The deal, like most of its recent deals, is aimed at strengthening its machine learning and AI operations. The startup, founded in 2016, provides a platform for data scientists to build and run machine learning models, covering training, model comparison, and recommendations, among other tasks. Co-founded by Yochay Ettun and Leah Forkosh Kolben, Cnvrg was valued at around $17 million in its last round.

According to a statement by an Intel spokesperson, Cnvrg will operate as an independent Intel company and will continue to serve its existing and future customers after the acquisition. There is no information, however, on the financial terms of the deal or on who will join Intel from the startup.

The deal comes merely a week after Intel announced the acquisition of San Francisco-based software optimisation startup SigOpt, a move intended to leverage SigOpt's technologies across Intel's products to accelerate, amplify, and scale AI software tools. SigOpt's software combined with Intel's hardware could give the company a major competitive advantage, providing differentiated value for data scientists and developers.

Japanese scientists have created a device that allows anyone to control a mini toy Gundam robot, one of anime’s most popular fictional battle robots, with their mind.

The researchers customized a mobile suit Zaku Gundam robot toy available through Bandai’s Zeonic Technics line that allows buyers to manually program their robot using a smartphone app.

For the mind-controlled prototype, researchers at NeU, a joint venture between Tohoku University and Hitachi, developed a version that moves in response to brain activity.
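NeU has not published the details of that mapping, but conceptually it comes down to turning a brain-activity reading into a robot command. The toy function below, including its threshold and command names, is purely an illustrative assumption.

```python
def command_from_brain_signal(activity, threshold=0.6):
    """Map a normalised brain-activity reading (0.0-1.0) to a robot command.
    The threshold and command names are illustrative assumptions; NeU has not
    published the actual mapping used in the prototype."""
    return "walk_forward" if activity >= threshold else "stand_still"

# Simulated stream of concentration readings from a headset
for reading in (0.2, 0.45, 0.7, 0.9, 0.5):
    print(reading, "->", command_from_brain_signal(reading))
```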
