
The U.S. Food and Drug Administration has for the first time approved a video game for treating attention deficit hyperactivity disorder in children.

The FDA said Monday that the game, built by Boston-based Akili Interactive Labs, can improve attention function.

The game, called EndeavorRx, requires a prescription and is designed for children ages 8 to 12 with certain symptoms of ADHD.

Black holes are the dark remnants of collapsed stars, regions of space cut off from the rest of the universe. If something falls into a black hole, it can never come back out. Not even light can escape, meaning black holes are invisible even with powerful telescopes. Yet physicists know black holes exist because they’re consistent with time-tested theories, and because astronomers have observed how matter behaves just outside a black hole.

Naturally, science fiction loves such an enigmatic entity. Black holes have played starring roles in popular books, movies and television shows, from “Star Trek” and “Doctor Who” to the 2014 blockbuster “Interstellar.”

But black holes aren’t quite as menacing as they are commonly portrayed. “They definitely do not suck,” says Daryl Haggard, an astrophysicist at McGill University in Montreal. “A black hole just sits there, passively. Things can fall onto it, just as meteors can fall to Earth, but it doesn’t pull stuff in.”

DeepMind wowed the research community several years ago by defeating top professionals in the ancient game of Go, and more recently saw its self-taught agents thrash pros in the video game StarCraft II. Now, the UK-based AI company has delivered another impressive innovation, this time in text-to-speech (TTS).

Text-to-speech systems take natural-language text as input and produce synthetic, human-like speech as output. TTS synthesis pipelines are complex, comprising multiple processing stages such as text normalisation, aligned linguistic featurisation, mel-spectrogram synthesis and raw audio waveform synthesis.
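To make that staged structure concrete, here is a minimal, self-contained Python sketch; the stage functions and their toy outputs are invented purely for illustration and do not reflect DeepMind's system or any production pipeline.

```python
# A toy sketch of a conventional multi-stage TTS pipeline. Each stage is a
# placeholder; real systems implement each one with a separately supervised model.

def normalise_text(text: str) -> str:
    # Text normalisation: expand numbers, abbreviations, etc. into spoken-form words.
    return text.lower().replace("1,600", "sixteen hundred")

def featurise(normalised: str) -> list:
    # Aligned linguistic featurisation; here just a toy token split.
    return normalised.split()

def mel_spectrogram(features: list) -> list:
    # Acoustic model stage: map linguistic features to mel-spectrogram frames (dummy values).
    return [[0.0] * 80 for _ in features]

def vocoder(mel: list) -> list:
    # Waveform synthesis stage: map spectrogram frames to raw audio samples (dummy values).
    return [0.0 for _frame in mel for _ in range(256)]

def synthesise(text: str) -> list:
    return vocoder(mel_spectrogram(featurise(normalise_text(text))))

print(len(synthesise("Hello there, world")))  # number of (dummy) audio samples produced
```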

Although contemporary TTS systems such as those used in digital assistants like Siri boast high-fidelity speech synthesis and wide real-world deployment, even the best of them still have drawbacks. Each stage requires expensive "ground truth" annotations to supervise its outputs, and the systems cannot train directly from characters or phonemes as input to synthesise speech in the end-to-end manner increasingly favoured in other machine learning domains.

Amateur astronauts, private space stations, flying factories, out-of-this-world movie sets — this is the future the space agency is striving to shape as it eases out of low-Earth orbit and aims for the moon and Mars.

It doesn’t quite reach the fantasized heights of George Jetson and Iron Man, but still promises plenty of thrills.

“I’m still waiting for my personal jetpack. But the future is incredibly exciting,” NASA astronaut Kjell Lindgren said the day before SpaceX’s historic liftoff.

A team including researchers from the Department of Chemistry at the University of Tokyo has successfully captured video of single molecules in motion at 1,600 frames per second. This is 100 times faster than previous experiments of this nature. They accomplished this by combining a powerful electron microscope with a highly sensitive camera and advanced image processing. This method could aid many areas of nanoscale research.

When it comes to film and video, the number of images captured or displayed every second is known as the frames per second, or fps. If video is captured at high fps but displayed at lower fps, the effect is a smooth slowing down of motion, which lets you perceive otherwise inaccessible details. For reference, films shown at cinemas have usually been displayed at 24 frames per second for nearly a century. In the last decade or so, special microscopes and cameras have allowed researchers to capture atomic-scale events at about 16 fps. But a new technique has increased this to a staggering 1,600 fps.
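As a rough worked example of that slowdown (the 24 fps playback rate is an assumption used only for illustration):

```python
# Slow-motion factor is simply the capture rate divided by the playback rate.
capture_fps = 1600
playback_fps = 24   # assumed cinema-style playback rate, for illustration

slowdown = capture_fps / playback_fps
print(f"Motion appears {slowdown:.1f}x slower than real time")   # ~66.7x
print(f"1 second of captured events plays back over {slowdown:.1f} seconds")
```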

Researchers in Australia have achieved a world record internet speed of 44.2 terabits per second, allowing users to download 1,000 HD movies in a single second.

A team from Monash, Swinburne and RMIT universities used a “micro-comb” optical chip containing hundreds of infrared lasers to transfer data across existing communications infrastructure in Melbourne.
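A quick back-of-envelope check of the headline figure, assuming an illustrative HD movie size of about 5.5 GB (real file sizes vary widely):

```python
# 44.2 terabits per second converted to bytes per second, compared against
# an assumed HD movie size.
link_tbps = 44.2
bytes_per_second = link_tbps * 1e12 / 8             # ~5.5e12 bytes/s, i.e. ~5.5 TB/s

assumed_movie_bytes = 5.5e9                          # ~5.5 GB per HD film (assumption)
movies_per_second = bytes_per_second / assumed_movie_bytes
print(f"~{movies_per_second:.0f} HD movies per second")  # roughly 1,000
```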

Over the last few years, creating fake videos that swap the face of one person onto another using artificial intelligence and machine learning has become a bit of a hobby for a number of enthusiasts online, with the results of these “deepfakes” getting better and better. Today, a new one applies that tech to Star Trek.

Deep Spocks

YouTuber Jarkan has released a number of deepfake videos featuring different actors swapped into iconic film scenes. Today's release takes Leonard Nimoy's younger Spock from the original Star Trek series and swaps him in for Zachary Quinto's Spock in J.J. Abrams' 2009 film Star Trek, in the scene where the younger Spock meets his older self, played by Nimoy. Deepfake swaps of Nimoy in for Quinto, or even for Ethan Peck in Discovery, have been done before, but this new one delivers more impressive results.

Eight years ago a machine learning algorithm learned to identify a cat — and it stunned the world. A few years later AI could accurately translate languages and take down world champion Go players. Now, machine learning has begun to excel at complex multiplayer video games like Starcraft and Dota 2 and subtle games like poker. AI, it would appear, is improving fast.

But how fast is fast, and what’s driving the pace? While better computer chips are key, AI research organization OpenAI thinks we should measure the pace of improvement of the actual machine learning algorithms too.

In a blog post and paper — authored by OpenAI's Danny Hernandez and Tom Brown and published on arXiv, an open repository for pre-print (or not-yet-peer-reviewed) studies — the researchers say they've begun tracking a new measure for machine learning efficiency (that is, doing more with less). Using this measure, they show AI has been getting more efficient at a wicked pace.
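To illustrate the kind of measure involved, the sketch below compares the training compute needed to reach a fixed benchmark score at two points in time and converts the gain into a doubling time; the numbers are invented placeholders, not figures from the paper.

```python
import math

# Hypothetical, illustrative numbers: training compute (in petaflop/s-days)
# needed to reach the same fixed benchmark score in two different years.
compute_then = 100.0   # compute required by an older model (placeholder)
compute_now = 2.0      # compute required by a newer, more efficient model (placeholder)
years_elapsed = 7.0

efficiency_gain = compute_then / compute_now                     # 50x less compute for the same result
doubling_time_months = 12 * years_elapsed / math.log2(efficiency_gain)

print(f"Efficiency improved {efficiency_gain:.0f}x")
print(f"Efficiency doubles roughly every {doubling_time_months:.0f} months")
```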