
As others have pointed out, voxel-based games have been around for a long time; a recent example is the whimsical “3D Dot Game Hero” for PS3, in which they use the low-res nature of the voxel world as a fun design element.

Voxel-based approaches have huge advantages (“infinite” detail, background details that are deformable at the pixel level, simpler simulation of particle-based phenomena like flowing water, etc.) but they’ll only win once computing power reaches an important crossover point. That point is where rendering an organic world a voxel at a time looks better than rendering zillions of polygons to approximate an organic world. Furthermore, much of the effort that’s gone into visually simulating real-world phenomena (read the last 30 years of Siggraph conference proceedings) will mostly have to be reapplied to voxel rendering. Simply put: lighting, caustics, organic elements like human faces and hair, etc. will have to be “figured out all over again” for the new era of voxel engines. It will therefore likely take a while for voxel approaches to produce results that look as good, even once the crossover point of level of detail is reached.

I don’t mean to take anything away from the hard and impressive coding work this team has done, but if they had more of an academic background, they’d know that much of what they’ve “pioneered” has been studied in tremendous detail for two decades. Hanan Samet’s treatise on the subject (http://www.amazon.com/Foundations-Multidimensional-Structures-Kaufmann-Computer/dp/0123694469/ref=sr_1_1?ie=UTF8&qid=1322140227&sr=8-1) tells you absolutely everything you need to know, and more; it even goes into detail about the application of these spatial data structures to other areas like machine learning. Ultimately, Samet’s book is all about the “curse of dimensionality” and how (and how much) data structures can help address it.
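The spatial data structure at the heart of most voxel engines is the octree, which is a centerpiece of Samet’s book. As a rough illustration of the idea (a minimal sketch of my own, not taken from any particular engine): each node covers a cube of space, and when a leaf accumulates too many points it splits into eight child octants, so dense regions get fine subdivision while empty regions stay cheap.

```python
# Minimal point octree sketch. Each node covers a cube; when a leaf
# exceeds its capacity it splits into 8 octants and redistributes its
# points, keeping storage proportional to where detail actually is.

class Octree:
    def __init__(self, center, half, capacity=4):
        self.center = center      # (x, y, z) center of this cube
        self.half = half          # half the cube's side length
        self.capacity = capacity  # max points before a leaf splits
        self.points = []
        self.children = None      # None while this node is a leaf

    def _octant(self, p):
        # Index 0..7 from the sign of p relative to the center on each axis.
        cx, cy, cz = self.center
        return (p[0] > cx) | ((p[1] > cy) << 1) | ((p[2] > cz) << 2)

    def insert(self, p):
        if self.children is not None:
            return self.children[self._octant(p)].insert(p)
        self.points.append(p)
        if len(self.points) > self.capacity and self.half > 1e-6:
            self._split()
        return True

    def _split(self):
        h = self.half / 2
        cx, cy, cz = self.center
        self.children = [
            Octree((cx + (h if i & 1 else -h),
                    cy + (h if i & 2 else -h),
                    cz + (h if i & 4 else -h)), h, self.capacity)
            for i in range(8)
        ]
        pts, self.points = self.points, []
        for q in pts:  # push existing points down into the new octants
            self.children[self._octant(q)].insert(q)

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(c.count() for c in self.children)
```

Real engines layer a great deal on top of this (sparse voxel octrees, level-of-detail traversal, GPU-friendly layouts), but the recursive subdivision above is the common core.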

If robots are to help out in places like hospitals and phone repair shops, they’re going to need a light touch. And what’s lighter than not touching at all? Researchers have created a gripper that uses ultrasonics to suspend an object in midair, potentially making it suitable for the most delicate tasks.

It’s done with an array of tiny speakers that emit sound at very carefully controlled frequencies and volumes. These produce a sort of standing pressure wave that can hold an object up or, if the pressure is coming from multiple directions, hold it in place or move it around.
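The geometry of those traps falls out of basic wave physics: a standing wave between an emitter and a reflector (or opposing emitters) has pressure nodes every half wavelength, and small objects settle near the nodes. A back-of-the-envelope check, using assumed numbers typical of ultrasonic transducers rather than figures from the article:

```python
# Spacing of standing-wave pressure nodes for a 40 kHz ultrasonic
# emitter in room-temperature air. Nodes (and thus trap positions)
# repeat every half wavelength.

SPEED_OF_SOUND_AIR = 343.0  # m/s at ~20 C (assumed)
FREQUENCY = 40_000.0        # Hz, a common ultrasonic transducer frequency

wavelength_mm = SPEED_OF_SOUND_AIR / FREQUENCY * 1000  # ~8.6 mm
node_spacing_mm = wavelength_mm / 2                    # ~4.3 mm

print(f"wavelength: {wavelength_mm:.2f} mm, trap spacing: {node_spacing_mm:.2f} mm")
```

Millimeter-scale trap spacing is why the technique suits tiny parts like watch gears or electronic components; objects much larger than half a wavelength cannot sit inside a single node.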

This kind of “acoustic levitation,” as it’s called, is not exactly new — we see it being used as a trick here and there, but so far there have been no obvious practical applications. Marcel Schuck and his team at ETH Zürich, however, show that a portable version of such a device could easily find a place in processes where tiny objects must be very lightly held.

Human skin is a fascinating multifunctional organ with unique properties originating from its flexible and compliant nature. It allows for interfacing with the external physical environment through numerous receptors interconnected with the nervous system. Scientists have long been trying to transfer these features to artificial skin, aiming at robotic applications.

Robotic systems rely heavily on the electronic and magnetic field sensing functionalities required for positioning and orientation in space. Much research has been devoted to implementing these functionalities in a flexible, compliant form. Recent advancements in flexible sensors and organic electronics have provided important prerequisites: these devices can operate on soft and elastic surfaces, while the sensors perceive various physical properties and transmit them via readout circuits.

To closely replicate natural skin, it is necessary to interconnect a large number of individual sensors. This challenging task became a major obstacle in realizing electronic skin. First demonstrations were based on an array of individual sensors addressed separately, which unavoidably resulted in a tremendous number of electronic connections. In order to reduce the necessary wiring, important technology had to be developed—namely, complex electronic circuits, current sources and switches had to be combined with individual magnetic sensors to achieve fully integrated devices.
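The wiring problem described above is easy to quantify. With the numbers below (my own illustrative figures, not from the research): addressing each sensor in an N × N array individually needs a line per sensor, while routing shared row-select and column-readout lines, the matrix-addressing approach that integrated switches enable, needs only 2N wires.

```python
# Wire counts for an N x N sensor array under two addressing schemes.

def wires_individual(n):
    # One dedicated signal line per sensor plus a shared return line.
    return n * n + 1

def wires_matrix(n):
    # Shared row-select lines plus shared column-readout lines;
    # on-pixel switches isolate the sensor at each row/column crossing.
    return 2 * n

for n in (4, 8, 16, 32):
    print(f"{n:>2} x {n:<2}: individual={wires_individual(n):>5}  matrix={wires_matrix(n):>3}")
```

The gap grows quadratically: at 32 × 32 sensors, individual addressing needs over a thousand connections versus 64 for the matrix scheme, which is why combining sensors with integrated switching circuits was the key enabling step.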

“The thing I find rewarding about coding: You’re literally creating something out of nothing. You’re kind of like a wizard.”


When the smiley-faced robot tells two boys to pick out the drawing of an ear from three choices, one of the boys, about 5, touches his nose. “No. Ear,” his teacher says, a note of frustration in her voice. The child picks up the drawing of an ear and hands it to the other boy, who shows it to the robot. “Yes, that is the ear,” the ever-patient robot says. “Good job.” The boys smile as the teacher pats the first boy in congratulations.

The robot is powered by technology created by Movia Robotics, founded by Tim Gifford in 2010 and headquartered in Bristol, Connecticut. Unlike other companies that have made robots intended to work with children with autism spectrum disorder (ASD), such as Beatbots, Movia focuses on building and integrating software that can work with a number of humanoid robots, such as the Nao. Movia has robots in three school districts in Connecticut. Through a U.S. Department of Defense contract, they’re being added to 60 schools for the children of military personnel worldwide.

It’s Gifford’s former computer science graduate student, Christian Wanamaker, who programs the robots. Before graduate school at the University of Connecticut, Wanamaker used his computer science degree to program commercial kitchen fryolators. He enjoys a crispy fry as much as anyone, but his work coding for robot-assisted therapy is much more challenging, interesting and rewarding, he says.

Circa 2016


Taking vertical urban indoor farming efficiency to the next level, a new automated plant coming to Japan will be staffed entirely by robots and produce 30,000 heads of lettuce daily.

[Image: Spread indoor farm]

The so-called Vegetable Factory is a project of Spread, a Japanese company already operating vertical farms. Located in Kyoto, its small army of bots will seed, water, trim and harvest the lettuce. Spread’s new automation technology will not only produce more lettuce; it will also reduce labor costs by 50%, cut energy use by 30%, and recycle 98% of the water needed to grow the crops.

The hype about artificial intelligence is unavoidable. From Beijing to Seattle, companies are investing vast sums into these data-hungry systems in the belief that they will profoundly transform the business landscape. The stories in this special report will deepen your understanding of a technology that may reshape our world.


© 2019 Fortune Media IP Limited.
