
Researchers at Tufts University School of Engineering have created light-activated composite devices able to execute precise, visible movements and form complex three-dimensional shapes without the need for wires or other actuating materials or energy sources. The design combines programmable photonic crystals with an elastomeric composite that can be engineered at the macro and nano scale to respond to illumination.

The research provides new avenues for the development of smart light-driven systems such as high-efficiency, self-aligning solar cells that automatically follow the sun’s direction and angle of light, light-actuated microfluidic valves, or soft robots that move with light on demand. A “photonic sunflower,” whose petals curl towards and away from illumination and which tracks the path and angle of the light, demonstrates the technology in a paper published March 12, 2021, in Nature Communications.

Color results from the absorption and reflection of light. Behind every flash of an iridescent butterfly wing or opal gemstone lie complex interactions in which natural photonic crystals embedded in the wing or stone absorb light of specific frequencies and reflect others. The angle at which the light meets the crystalline surface can affect which wavelengths are absorbed and the heat that is generated from that absorbed energy.
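To make the angle dependence concrete, here is a minimal sketch that treats the photonic crystal as an idealized Bragg stack. The first-order Bragg condition used below is standard optics; the lattice period and effective index are illustrative values, not parameters from the Tufts paper.

```python
# Minimal sketch: angle-dependent reflection from an idealized photonic
# crystal, modeled as a Bragg stack. Values are illustrative, not from
# the paper.
import math

def bragg_peak_wavelength_nm(period_nm: float, n_eff: float, angle_deg: float) -> float:
    """First-order Bragg condition: lambda = 2 * d * sqrt(n_eff^2 - sin^2(theta)).

    period_nm: lattice period of the stack, d
    n_eff:     effective refractive index of the composite
    angle_deg: angle of incidence measured from the surface normal
    """
    s = math.sin(math.radians(angle_deg))
    return 2.0 * period_nm * math.sqrt(n_eff**2 - s**2)

# As the light's angle changes, the reflected (and hence absorbed) band
# shifts, which changes how much incident energy becomes heat.
for angle in (0, 20, 40, 60):
    print(f"{angle:2d} deg -> peak ~{bragg_peak_wavelength_nm(180.0, 1.45, angle):.0f} nm")
```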

Most new achievements in artificial intelligence (AI) require very large neural networks. They consist of hundreds of millions of neurons arranged in several hundred layers, i.e. they have very ‘deep’ network structures. These large, deep neural networks consume a lot of energy in the computers that run them. Neural networks used for image classification (e.g. face and object recognition) are particularly energy-intensive, since they have to send very many numerical values from one neuron layer to the next with great accuracy in each time cycle.

Computer scientist Wolfgang Maass, together with his Ph.D. student Christoph Stöckl, has now found a design method that paves the way for energy-efficient high-performance AI hardware (e.g. chips for driver assistance systems, smartphones and other mobile devices). The two researchers from the Institute of Theoretical Computer Science at Graz University of Technology (TU Graz) have optimized artificial neural networks for image classification in such a way that the neurons, similar to neurons in the brain, only need to send out signals relatively rarely, and the signals they do send are very simple. The classification accuracy achieved with this design is nevertheless very close to the current state of the art in image classification.
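As an illustration of the general principle, sparse and simple signals standing in for high-precision numbers, here is a minimal sketch. The binary place-value spike code below is an assumed stand-in for exposition, not the exact coding scheme from the TU Graz paper.

```python
# Sketch: sending an activation with a few binary spike events instead of a
# high-precision number. A value in [0, 1) is approximated by K spikes whose
# "weights" are powers of two, so K events carry 2^K levels of resolution.
# This illustrates the general idea of few-spike coding; it is not the exact
# scheme from the TU Graz paper.

def encode_spikes(value: float, k: int) -> list[int]:
    """Greedy binary expansion: a spike in slot i contributes 2**-(i+1)."""
    spikes, remainder = [], value
    for i in range(k):
        weight = 2.0 ** -(i + 1)
        if remainder >= weight:
            spikes.append(1)
            remainder -= weight
        else:
            spikes.append(0)
    return spikes

def decode_spikes(spikes: list[int]) -> float:
    return sum(s * 2.0 ** -(i + 1) for i, s in enumerate(spikes))

activation = 0.7031
spikes = encode_spikes(activation, k=8)
print(spikes, "->", decode_spikes(spikes))  # 8 events reproduce ~0.703 closely
# Energy intuition: only sum(spikes) events are actually transmitted, versus
# one full-precision number per neuron per time step.
```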

Warehouse automation company Nimble Robotics today announced that it has raised a $50 million Series A. Led by DNS Capital and GSR Ventures and featuring Accel and Reinvent Capital, the round will go toward helping the company essentially double its headcount this year.

Founded by former Stanford PhD student Simon Kalouche, the company built a system that utilizes deep imitation learning – a popular concept in robotics research in which systems learn and improve by imitating demonstrated behavior.

“Instead of letting it sit in a lab for five years and creating this robotic application before it’s finally ready to deploy to the real world, we deployed it today,” says Kalouche. “It’s not fully autonomous – it’s autonomous maybe 90, 95% of the time. The other 5–10% is assisted by remote human operators, but it’s reliable on day one, and it’s reliable on day 10000.”
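The pattern Kalouche describes, autonomy with a human fallback, can be sketched in a few lines. Everything below (function names, the confidence threshold, the stand-in policy) is hypothetical; it only illustrates the confidence-gated handoff to a remote operator.

```python
# Sketch of the hybrid-autonomy pattern described above: a learned grasping
# policy acts on its own when confident and escalates to a remote human
# operator otherwise. All names and values here are hypothetical.
import random

CONFIDENCE_THRESHOLD = 0.90  # tuned so ~90-95% of picks run autonomously

def policy_propose_grasp(observation):
    """Stand-in for a learned (e.g., imitation-trained) policy.

    Returns a grasp pose and the policy's confidence in it."""
    grasp = {"x": 0.10, "y": 0.20, "theta": 0.5}
    return grasp, random.uniform(0.6, 1.0)

def ask_remote_operator(observation):
    """Stand-in for teleoperation: a human picks the grasp point.

    Each correction can also be logged as a new imitation-learning example."""
    return {"x": 0.12, "y": 0.18, "theta": 0.4}

def pick(observation):
    grasp, confidence = policy_propose_grasp(observation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return grasp, "autonomous"
    return ask_remote_operator(observation), "human-assisted"

for i in range(5):
    _, mode = pick(observation={"frame": i})
    print(f"pick {i}: {mode}")
```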

EA, Ubisoft, Warner Bros, and more explore how artificial intelligence innovations will lead to more believable open worlds and personal adventures within them.


Most NPCs simply patrol a specific area until the player interacts with them, at which point they try to become a more challenging target to hit. That’s fine in confined spaces, but in big worlds where NPCs have the freedom to roam, it just doesn’t scale. More advanced AI techniques such as machine learning – which uses algorithms to study incoming data, interpret it, and decide on a course of action in real-time – give AI agents much more flexibility and freedom. But developing them is time-consuming, computationally expensive, and a risk because it makes NPCs less predictable – hence the Assassin’s Creed Valhalla stalking situation.
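For a feel of the difference, here is a toy sketch contrasting a scripted guard with one that adapts online. The learned agent below is a minimal bandit-style learner with made-up states, actions, and rewards; real game AI is far richer, but the control-flow contrast is the point.

```python
# Toy contrast: a scripted NPC with a fixed rule versus one that learns which
# action pays off from experience. States, actions, and rewards are made up.
import random

ACTIONS = ["patrol", "chase", "take_cover", "flank"]

def scripted_npc(state):
    # Classic approach: one fixed rule per state. Predictable, doesn't scale.
    return "chase" if state["sees_player"] else "patrol"

class LearnedNPC:
    """Minimal bandit-style learner: tracks the average payoff per action."""
    def __init__(self, lr=0.1, epsilon=0.1):
        self.q = {}                      # (state_key, action) -> value estimate
        self.lr, self.epsilon = lr, epsilon

    def act(self, state):
        key = state["sees_player"]
        if random.random() < self.epsilon:       # explore occasionally
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((key, a), 0.0))

    def learn(self, state, action, reward):
        key = state["sees_player"]
        old = self.q.get((key, action), 0.0)
        self.q[(key, action)] = old + self.lr * (reward - old)

npc = LearnedNPC()
for step in range(200):
    state = {"sees_player": random.random() < 0.5}
    action = npc.act(state)
    # Toy reward: flanking a visible player beats plain chasing.
    if state["sees_player"]:
        reward = {"chase": 0.5, "flank": 1.0}.get(action, 0.0)
    else:
        reward = 1.0 if action == "patrol" else 0.0
    npc.learn(state, action, reward)

print("scripted:", scripted_npc({"sees_player": True}))
print("learned :", npc.act({"sees_player": True}))
```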

However, as open-world and narrative-based games become more complex, and as modern PCs and consoles display ever more authentic and detailed environments, the need for more advanced AI techniques is growing. It’s going to be weird and alienating to be thrust into an almost photorealistic world filled with intricate systems and narrative possibilities, only to discover that non-player characters still act like soulless robots.

This is something the developers pushing the boundaries of open-world game design understand. Ubisoft, for example, has dedicated AI research teams at its Chengdu, Mumbai, Pune, and Montpellier studios, as well as a Strategic Innovation Lab in Paris and the Montreal studio’s La Forge lab, and is working with tech firms and universities on academic AI research topics.

https://youtube.com/watch?v=NOujMHH3LAU

Holograms deliver an exceptional representation of the 3D world around us. Plus, they’re beautiful. (Go ahead: check out the holographic dove on your Visa card.) Holograms offer a shifting perspective based on the viewer’s position, and they allow the eye to adjust focal depth to alternately focus on foreground and background.

Researchers have long sought to make computer-generated holograms, but the process has traditionally required a supercomputer to churn through physics simulations, which is time-consuming and can yield less-than-photorealistic results. Now, MIT researchers have developed a new way to produce holograms almost instantly — and the deep learning-based method is so efficient that it can run on a laptop in the blink of an eye, the researchers say.
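In spirit, the deep-learning approach replaces a physics simulation with a single forward pass through a neural network. The PyTorch sketch below is a generic illustration under that assumption, a small CNN mapping an RGB-D image to a phase-only hologram; it is not MIT's published architecture.

```python
# Sketch of the deep-learning approach in spirit: a compact convolutional
# network maps an RGB-D image straight to a phase-only hologram in one
# forward pass, replacing a slow physics simulation. Generic illustration,
# not MIT's published architecture.
import torch
import torch.nn as nn

class HologramNet(nn.Module):
    def __init__(self, width: int = 24):
        super().__init__()
        layers, ch = [], 4                          # input: RGB + depth
        for _ in range(4):                          # a few small conv stages
            layers += [nn.Conv2d(ch, width, 3, padding=1), nn.ReLU()]
            ch = width
        layers += [nn.Conv2d(ch, 1, 3, padding=1)]  # one phase channel out
        self.net = nn.Sequential(*layers)

    def forward(self, rgbd: torch.Tensor) -> torch.Tensor:
        # Squash to [-pi, pi]: a valid phase value per output pixel.
        return torch.pi * torch.tanh(self.net(rgbd))

model = HologramNet()
rgbd = torch.rand(1, 4, 192, 192)   # batch of one RGB-D frame
phase = model(rgbd)                 # (1, 1, 192, 192) phase map
print(phase.shape, float(phase.min()), float(phase.max()))
```

A network this small is the reason the method can run on a laptop: inference is a fixed, modest number of convolutions rather than an open-ended wave-propagation simulation.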

Circa 2010


About 48 kilometers off the eastern coast of the United States, scientists from Rutgers, the State University of New Jersey, peered over the side of a small research vessel, the Arabella. They had just launched RU27, a 2-meter-long oceanographic probe shaped like a torpedo with wings. Although it sported a bright yellow paint job for good visibility, it was unclear whether anyone would ever see this underwater robot again. Its mission, simply put, was to cross the Atlantic before its batteries gave out.

Unlike other underwater drones, RU27 and its kin are able to travel without the aid of a propeller. Instead, they move up and down through the top 100 to 200 meters of seawater by adjusting their buoyancy while gliding forward using their swept-back wings. With this strategy, they can go a remarkably long way on a remarkably small amount of energy.

When submerged and thus out of radio contact, RU27 steered itself with the aid of sensors that registered depth, heading, and angle from the horizontal. From those inputs, it could dead reckon where it had glided since its last GPS navigational fix. Every 8 hours the probe broke the surface and briefly stuck its tail in the air, exposing its GPS antenna as well as the antenna of an Iridium satellite modem. This allowed the vehicle to contact its operators, who were located in New Brunswick, N.J., in the Rutgers Coastal Ocean Observation Lab, or COOL Room.
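A minimal sketch of that dead-reckoning step: between surfacings the glider integrates an assumed through-water speed along its measured heading and pitch. The speed value and sampling rate below are illustrative; ocean currents add drift the glider cannot sense, which is exactly what the 8-hourly GPS fix corrects.

```python
# Sketch of dead reckoning between GPS fixes: the glider only knows depth,
# heading, and pitch, so it integrates an estimated through-water speed along
# its heading to track roughly where it is. Speed and sample rate are
# illustrative; the 8-hour surfacing cycle is from the article.
import math

def dead_reckon(fix_x_m, fix_y_m, samples, speed_mps=0.35):
    """Integrate position from (heading_deg, pitch_deg, dt_s) samples.

    speed_mps is an assumed glide speed; currents add unmodeled drift,
    which is why the surfacing GPS fix is needed to correct the track."""
    x, y = fix_x_m, fix_y_m
    for heading_deg, pitch_deg, dt_s in samples:
        horiz = speed_mps * math.cos(math.radians(pitch_deg)) * dt_s
        x += horiz * math.sin(math.radians(heading_deg))  # east component
        y += horiz * math.cos(math.radians(heading_deg))  # north component
    return x, y

# One 8-hour leg at a steady 75-degree heading, gliding 25 degrees nose-down,
# sampled once per minute.
leg = [(75.0, -25.0, 60.0)] * (8 * 60)
print("estimated position: %.0f m east, %.0f m north" % dead_reckon(0.0, 0.0, leg))
```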

Geoscientists at Sandia National Laboratories used 3D-printed rocks and an advanced, large-scale computer model of past earthquakes to understand and prevent earthquakes triggered by energy exploration.

Injecting water underground after unconventional oil and gas extraction (commonly known as fracking), during geothermal energy stimulation, or for carbon dioxide sequestration can all trigger earthquakes. Of course, energy companies do their due diligence to check for faults—breaks in the earth’s upper crust that are prone to earthquakes—but sometimes earthquakes, even swarms of earthquakes, strike unexpectedly.

Sandia geoscientists studied how pressure and stress from injecting water can transfer through pores in rocks down to fault lines, including previously hidden ones. They also crushed rocks with specially engineered weak points to hear the sound of different types of fault failures, which will aid in early detection of an induced earthquake.
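The mechanism can be illustrated with the standard Coulomb failure criterion: raising pore pressure reduces the effective normal stress clamping a fault, so less shear stress is needed for it to slip. The sketch below uses illustrative numbers, not values from the Sandia study.

```python
# Sketch of why injected water can trigger slip on a fault: in the standard
# Coulomb picture, raising pore pressure p lowers the effective normal stress
# clamping the fault. Numbers are illustrative, not from the Sandia study.

def coulomb_failure_stress(shear_mpa, normal_mpa, pore_mpa,
                           friction=0.6, cohesion_mpa=0.0):
    """CFS = tau - mu * (sigma_n - p) - C; slip is favored when CFS >= 0."""
    return shear_mpa - friction * (normal_mpa - pore_mpa) - cohesion_mpa

tau, sigma_n = 20.0, 40.0          # MPa: shear and normal stress on the fault
for p in (0.0, 5.0, 10.0, 15.0):   # rising pore pressure from injection
    cfs = coulomb_failure_stress(tau, sigma_n, p)
    print(f"pore pressure {p:4.1f} MPa -> CFS {cfs:+5.1f} MPa",
          "(slip favored)" if cfs >= 0 else "(stable)")
```

With these illustrative numbers the fault is stable at ambient pressure and crosses into failure once injection raises pore pressure by roughly 7 MPa, which is the essence of induced seismicity.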

Researchers have published a study revealing their successful approach to designing much quieter propellers.

The Australian research team used machine learning to design their propellers, then 3D printed several of the most promising prototypes for experimental acoustic testing at the Commonwealth Scientific and Industrial Research Organisation’s specialized ‘echo-free’ chamber.

Results now published in Aerospace Research Central show the prototypes made around 15 dB less noise than commercially available propellers, validating the team’s design methodology.
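Since decibels are logarithmic, the 15 dB figure translates into large physical ratios; the short calculation below is plain arithmetic from the definition of the decibel, with only the 15 dB value taken from the study.

```python
# What "around 15 dB less noise" means physically. Decibels are logarithmic,
# so the ratios follow directly from the definition; only the 15 dB figure
# comes from the study.

delta_db = 15.0
power_ratio = 10 ** (delta_db / 10)      # ratio of acoustic power/intensity
pressure_ratio = 10 ** (delta_db / 20)   # ratio of sound pressure amplitude

print(f"{delta_db} dB quieter -> ~{power_ratio:.0f}x less acoustic power")
print(f"{delta_db} dB quieter -> ~{pressure_ratio:.1f}x lower sound pressure")
# ~31.6x less radiated power and ~5.6x lower pressure amplitude; as a rough
# perceptual rule of thumb, a 10 dB drop sounds about half as loud.
```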