
Back in July, OpenAI’s latest language model, GPT-3, dazzled with its ability to churn out paragraphs that look as if they could have been written by a human. People started showing off how GPT-3 could also autocomplete code or fill in blanks in spreadsheets.

In one example, developer Paul Katsen tweeted “the spreadsheet function to rule them all,” in which GPT-3 fills out columns by itself, pulling in data for US states: the population of Michigan is 10.3 million, Alaska became a state in 1906, and so on.

Except that GPT-3 can be a bit of a bullshitter. The population of Michigan has never been 10.3 million, and Alaska became a state in 1959.
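
Katsen’s “spreadsheet function” boils down to a thin wrapper that turns a cell’s row and column into a prompt and pastes back whatever the model returns. Here is a minimal sketch of that idea in Python, assuming the original (now deprecated) `openai.Completion` endpoint; the prompt wording and the `fill_cell` helper are illustrative guesses, not Katsen’s actual code:

```python
# A minimal sketch of a GPT-3-backed "spreadsheet function", assuming the
# pre-1.0 (now deprecated) openai Python client. The prompt wording and the
# fill_cell helper are invented for illustration.
import openai

openai.api_key = "sk-..."  # your API key

def fill_cell(state: str, column: str) -> str:
    """Ask the model to complete one cell, e.g. ("Michigan", "population")."""
    prompt = f"The {column} of the US state of {state} is"
    resp = openai.Completion.create(
        engine="davinci",   # the original GPT-3 base model
        prompt=prompt,
        max_tokens=8,
        temperature=0,
    )
    return resp.choices[0].text.strip()

for state in ["Michigan", "Alaska"]:
    # The model autocompletes plausible-sounding text; nothing here checks
    # it against real data, which is exactly how a wrong "10.3 million"
    # lands in a cell looking authoritative.
    print(state, fill_cell(state, "population"))
```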

DeepMind today announced a new milestone for its artificial intelligence agents trained to play the Blizzard Entertainment game StarCraft II. The Google-owned AI lab’s more sophisticated software, still called AlphaStar, is now grandmaster level in the real-time strategy game, capable of besting 99.8 percent of all human players in competition. The findings are to be published in a research paper in the scientific journal Nature.

Not only that, but DeepMind says it also leveled the playing field when testing the new and improved AlphaStar against human opponents who opted into online competitions this past summer. For one, it trained AlphaStar to use all three of the game’s playable races, adding to the complexity of the game at the upper echelons of pro play. It also limited AlphaStar to viewing only the portion of the map a human would see and restricted the number of mouse clicks it could register to 22 non-duplicated actions every five seconds of play, to align it with standard human movement.
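
That 22-actions-per-five-seconds cap is easy to picture as a rolling-window rate limiter. Here is a toy sketch of such a constraint, with class and method names invented for illustration (DeepMind has not published its interface in this form):

```python
# Toy sketch of an action-rate cap like the one described above: at most
# 22 non-duplicated actions per rolling five-second window. Duplicate
# actions (repeats of the previous one) do not count against the cap.
from collections import deque

WINDOW_SECONDS = 5.0
MAX_ACTIONS = 22

class ActionThrottle:
    def __init__(self):
        self.history = deque()   # timestamps of counted actions
        self.last_action = None

    def allow(self, action: str, now: float) -> bool:
        # Discard timestamps that have aged out of the window.
        while self.history and now - self.history[0] > WINDOW_SECONDS:
            self.history.popleft()
        if action == self.last_action:
            return True          # a duplicate is allowed and left uncounted
        if len(self.history) >= MAX_ACTIONS:
            return False         # over the human-like rate limit
        self.history.append(now)
        self.last_action = action
        return True

throttle = ActionThrottle()
print(throttle.allow("select_army", now=0.1))  # True, counted
print(throttle.allow("select_army", now=0.2))  # True, duplicate, uncounted
```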

MIT’s Cheetah 3 robot can now leap and gallop across rough terrain, climb a staircase littered with debris, and quickly recover its balance when suddenly yanked or shoved, all while essentially blind.


Circa 2018


SPIDERS often make people jump, but a bunch of clever scientists have managed to train one to jump on demand.

Researchers managed to teach the spider – nicknamed Kim – to jump from different heights and distances so they could film the arachnid’s super-springy movements.

The study is part of a research programme by the University of Manchester which aims to create a new class of micro-robots agile enough to jump like acrobatic spiders.

November 2019 is a landmark month in the history of the future. That’s when humanoid robots that are indistinguishable from people start running amok in Los Angeles. Well, at least they do in the seminal sci-fi film “Blade Runner.” Thirty-seven years after its release, we don’t have murderous androids running around. But we do have androids like Hanson Robotics’ Sophia, and they could soon start working in jobs traditionally performed by people.

Russian start-up Promobot recently unveiled what it calls the world’s first autonomous android. It closely resembles a real person and can serve in a business capacity. Robo-C can be made to look like anyone, so it’s like an android clone. It comes with an artificial intelligence system that has more than 100,000 speech modules, according to the company. It can operate at home, acting as a companion robot and reading out the news or managing smart appliances — basically, an anthropomorphic smart speaker. It can also perform workplace tasks such as answering customer questions in places like offices, airports, banks and museums, while accepting payments and performing other functions.

“We analyzed the needs of our customers, and there was a demand,” says Promobot co-founder and development director Oleg Kivokurtsev. “But, of course, we started the development of an anthropomorphic robot a long time ago, since in robotics there is the concept of the ‘Uncanny Valley,’ and the most positive perception of the robot arises when it looks like a person. Now we have more than 10 orders from companies and private clients from around the world.”

Artificial Intelligence (AI) is one of the most powerful technologies ever developed, but it’s not nearly as new as you might think. In fact, it’s undergone several evolutions since its inception in the 1950s. The first generation of AI was ‘descriptive analytics,’ which answers the question, “What happened?” The second, ‘diagnostic analytics,’ addresses, “Why did it happen?” The third and current generation is ‘predictive analytics,’ which answers the question, “Based on what has already happened, what could happen in the future?”
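
To make that taxonomy concrete, here is a small sketch on invented monthly sales figures: the descriptive step only summarizes the past, while the predictive step fits a model and extrapolates a month ahead.

```python
# Descriptive vs. predictive analytics on made-up monthly sales data.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)  # Jan..Dec as 1..12
sales = np.array([10, 12, 13, 15, 14, 16, 18, 17, 19, 21, 22, 24.0])

# Descriptive analytics: "What happened?"
print("mean monthly sales:", sales.mean())

# Predictive analytics: "Based on what happened, what could happen next?"
model = LinearRegression().fit(months, sales)
print("forecast for month 13:", model.predict([[13]])[0])
```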

While predictive analytics can be very helpful and save time for data scientists, it is still fully dependent on historical data. Data scientists are therefore left helpless when faced with new, unknown scenarios. To have true “artificial intelligence,” we need machines that can “think” on their own, especially when faced with an unfamiliar situation. We need AI that can not just analyze the data it is shown, but also express a “gut feeling” when something doesn’t add up. In short, we need AI that can mimic human intuition. Thankfully, we have it.
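
The nearest off-the-shelf approximation of that “gut feeling” is anomaly detection: a model learns the shape of familiar data and flags inputs that don’t fit it. Here is a minimal sketch with scikit-learn’s IsolationForest, chosen purely as an illustration and not as the specific system the author has in mind:

```python
# A rough stand-in for machine "intuition": an IsolationForest trained on
# familiar data flags points that look like nothing it has seen before.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
familiar = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # historical data

detector = IsolationForest(random_state=0).fit(familiar)

new_points = np.array([[0.1, -0.2],   # resembles the past
                       [8.0, 8.0]])   # an unfamiliar scenario
print(detector.predict(new_points))   # 1 = normal, -1 = "doesn't add up"
```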

Please have a listen to Episode 14 of Cosmic Controversy with guest Julie Castillo-Rogez, NASA’s Dawn mission project scientist. We spend much of the episode discussing the beguiling dwarf planet Ceres and the need for a sample return mission.


This week’s guest is NASA Dawn project scientist Julie Castillo-Rogez, who led the hugely successful robotic mission that gave us the first in-depth look at the asteroid Vesta and the dwarf planet Ceres. Castillo-Rogez talks about why there’s a growing consensus that Ceres may have long had habitable subsurface conditions and why a sample return mission needs to launch in 2033. We also discuss Mars’ moons, Deimos and Phobos, and the first interstellar asteroid, ‘Oumuamua.

From the understated opulence of a Bentley to the stalwart family minivan to the utilitarian pickup, Americans know that the car you drive is an outward statement of personality. You are what you drive, as the saying goes, and researchers at Stanford have just taken that maxim to a new level.

Using computer algorithms that can see and learn, they have analyzed millions of publicly available images on Google Street View. The researchers say they can use that knowledge to determine the political leanings of a given neighborhood just by looking at the cars on the streets.
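
The second half of that pipeline is straightforward to picture: once the cars in each neighborhood have been detected and classified, per-area car-type frequencies become ordinary features for a predictive model. Here is a toy sketch with invented numbers and categories (the actual study worked from far richer car attributes and precinct-level election results):

```python
# Toy version of the "cars -> political leaning" step. Rows are
# neighborhoods; columns are the fraction of detected vehicles that are
# sedans, pickups, and minivans. All numbers are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

car_mix = np.array([
    [0.70, 0.10, 0.20],
    [0.30, 0.55, 0.15],
    [0.60, 0.20, 0.20],
    [0.25, 0.60, 0.15],
])
leans_democrat = np.array([1, 0, 1, 0])  # labels from public voting records

model = LogisticRegression().fit(car_mix, leans_democrat)
new_neighborhood = np.array([[0.50, 0.30, 0.20]])
print(model.predict_proba(new_neighborhood))  # probabilities in class order (0, 1)
```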

“Using easily obtainable visual data, we can learn so much about our communities, on par with some information that takes billions of dollars to obtain via census surveys. More importantly, this research opens up more possibilities of virtually continuous study of our society using sometimes cheaply available visual data,” said Fei-Fei Li, an associate professor of computer science at Stanford and director of the Stanford Artificial Intelligence Lab and the Stanford Vision Lab, where the work was done.