
In Brief

  • Jürgen Schmidhuber asserts that, by 2050, there will be trillions of self-replicating robot factories in the asteroid belt.
  • In a few million years, robots will naturally explore the galaxy out of curiosity, setting their own goals without much human interaction.

Read more

Computers are getting smarter, but for now they’re stuck in a sort of uncanny valley of intelligence, reassembling normal, everyday objects into increasingly creepy combinations. First came the revelations of Google’s DeepDream technology, which, in learning to “see” objects, “saw” creepy multi-eyed organisms all over the place, turning the world into a half-sentient, dog-like mess.
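DeepDream’s trick is to run a trained image classifier in reverse: instead of updating the network’s weights, gradient ascent updates the pixels of the input image so that a chosen layer’s activations grow as strong as possible, and whatever features that layer has learned to detect (eyes and dog faces prominent among them) get painted back into the picture. Below is a minimal sketch of that idea, assuming PyTorch and torchvision are available; it uses VGG16 as a stand-in for Google’s original Inception network, and the layer index, step size, and iteration count are purely illustrative.

```python
# Minimal DeepDream-style sketch: gradient ascent on the input image to
# amplify a chosen layer's activations. Layer choice, step size, and
# iteration count are illustrative assumptions, not DeepDream's exact recipe.
import torch
from torchvision import models, transforms
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# Truncate a pretrained network at an intermediate layer; maximizing its
# activations makes the network "hallucinate" the patterns it has learned.
net = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:20].to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize(512),
    transforms.ToTensor(),
])

img = preprocess(Image.open("input.jpg")).unsqueeze(0).to(device)
img.requires_grad_(True)

for _ in range(50):
    net.zero_grad()
    activations = net(img)
    loss = activations.norm()   # "make this layer fire harder"
    loss.backward()
    with torch.no_grad():
        # Normalized gradient-ascent step on the pixels, not the weights.
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.clamp_(0, 1)
        img.grad.zero_()

transforms.ToPILImage()(img.detach().squeeze(0).cpu()).save("dreamed.jpg")
```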

Now, researchers in Toronto have used a technology called “neural karaoke” to teach a computer to write a song after looking at a photo, and the little carol it penned after viewing a festive Christmas tree is an absolutely horrifying display of what these things think of us.

Read more

Big corporations prefer robots to human employees.


It’s a sign of things to come.

In the last five years, online shopping has produced tens of thousands of new warehouse jobs in California, many of them in Riverside and San Bernardino counties. Most of them paid blue-collar workers decent wages to do menial tasks: putting things in boxes and sending them out to the world.

But automated machines and software have been taking up more and more space in the region’s warehouses, and taking over jobs that were once done by humans. Today, fewer jobs are being added, though some of them pay more.

Read more

Step inside the portal and everything is white, calm, silent: this is where researchers are helping craft the future of virtual reality. I speak out loud, and my voice echoes around the empty space. In place of the clutter on the outside, each panel is unadorned, save for a series of small black spots: cameras recording your every move. There are 480 VGA cameras and 30 HD cameras, as well as 10 RGB-D depth sensors borrowed from Xbox gaming consoles. The massive collection of recording apparatus is synced together, and its collective output is combined into a single digital file. One minute of recording amounts to 600GB of data.
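Just how demanding that is becomes clear with some back-of-envelope arithmetic. The article gives only the camera counts and the 600GB-per-minute total, so the frame rates and raw pixel sizes in the sketch below are assumptions; the point is simply that hundreds of synchronized video streams add up to data on that order, and that fully uncompressed streams would actually exceed the stated figure, implying some compression or subsampling in the pipeline.

```python
# Back-of-envelope check of the "600 GB per minute" figure. Only the camera
# counts come from the article; frame rates and bytes-per-pixel are assumptions.
VGA = dict(count=480, width=640,  height=480,  fps=25, bytes_per_px=3)
HD  = dict(count=30,  width=1920, height=1080, fps=30, bytes_per_px=3)

def gb_per_minute(cam):
    frame_bytes = cam["width"] * cam["height"] * cam["bytes_per_px"]
    return cam["count"] * frame_bytes * cam["fps"] * 60 / 1e9

total = gb_per_minute(VGA) + gb_per_minute(HD)
print(f"VGA streams: {gb_per_minute(VGA):.0f} GB/min")   # ~660 GB/min raw
print(f"HD streams:  {gb_per_minute(HD):.0f} GB/min")    # ~340 GB/min raw
print(f"Total before depth sensors or compression: {total:.0f} GB/min")
```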

The hundreds of cameras record people talking, bartering, and playing games. Imagine the motion-capture systems used by Hollywood filmmakers, but on steroids. The footage it records captures a stunningly accurate three-dimensional representation of people’s bodies in motion, from the bend in an elbow to a wrinkle in your brow. The lab is trying to map the language of our bodies, the signals and social cues we send one another with our hands, posture, and gaze. It is building a database that aims to decipher the constant, unspoken communication we all use without thinking, what the early 20th century anthropologist Edward Sapir once called an “elaborate code that is written nowhere, known to no one, and understood by all.”
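At the geometric core of that reconstruction is a simple operation: if the same body landmark is seen from two or more calibrated cameras, its 3D position can be triangulated from the 2D detections. The studio’s actual pipeline (hundreds of views, per-joint detectors, temporal smoothing) is far more elaborate, but the sketch below, which assumes NumPy and uses made-up projection matrices, shows the basic linear (DLT) triangulation step.

```python
# Linear (DLT) triangulation of one 3D point from multiple calibrated views.
# The camera matrices and pixel coordinates here are invented for illustration.
import numpy as np

def triangulate(P_list, x_list):
    """P_list: 3x4 projection matrices; x_list: matching (u, v) pixel coords."""
    rows = []
    for P, (u, v) in zip(P_list, x_list):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]              # de-homogenize

# Two toy cameras: one at the origin, one whose center sits 1 unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.2, 0.1, 5.0, 1.0])          # ground-truth 3D landmark
x1 = P1 @ point; x1 = x1[:2] / x1[2]            # its projection in camera 1
x2 = P2 @ point; x2 = x2[:2] / x2[2]            # its projection in camera 2
print(triangulate([P1, P2], [x1, x2]))          # recovers ~[0.2, 0.1, 5.0]
```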

The original goal of the Panoptic Studio was to use this understanding of body language to improve the way robots relate to human beings, to make them more natural partners at work or in play. But the research being done here has recently found another purpose. What works for making robots more lifelike and social could also be applied to virtual characters. That’s why this basement lab caught the attention of one of the biggest players in virtual reality: Facebook. In April 2015, the Silicon Valley giant hired Yaser Sheikh, an associate professor at Carnegie Mellon and director of the Panoptic Studio, to assist in research to improve social interaction in VR.

Read more

With brain-machine interface (BMI) technology, cell circuitry, and the like, this is no surprise.


Are you scared of artificial intelligence (AI)?

Do you believe the warnings from the likes of Prof. Stephen Hawking and Elon Musk?

Is AI the greatest tool humanity will ever create, or are we “summoning the demon”?

To quote the head of AI at Singularity University, Neil Jacobstein, “It’s not artificial intelligence I’m worried about, it’s human stupidity.”

Read more

One of the oddest military drones aborning reinvents a stillborn technology from 1951. That’s because the unmanned aircraft revolution is resurrecting configurations that were tried more than a half century ago but proved impractical with a human pilot inside. The case in point: Northrop Grumman’s new Tern, a drone designed to do everything armed MQ-1 Predators or MQ-9 Reapers can, but to do it flying from small ships or rugged scraps of land – i.e., no runway needed.

“No one has flown a large, unmanned tailsitter before,” Brad Tousley, director of the Tactical Technology Office at the Defense Advanced Research Projects Agency (DARPA), Tern’s primary funder, said in a news release. The key word there is “unmanned.”

Back in 1951, when all sorts of vertical takeoff and landing aircraft ideas were being tried, Convair and Lockheed built experimental manned tailsitters for the Navy. Convair’s XFY-1 and Lockheed’s XFV-1, nicknamed “Pogo” and “Pogo Stick,” each had two counter-rotating propellers on its nose and was designed to take off and land pointing straight up. Convair’s Pogo had a delta wing and, at right angles to the wing, large fins. Lockheed’s Pogo Stick had an X-shaped tail whose trailing tips, like Convair’s wing and fins, sported landing gear.

Read more