The potential foray into “personal air mobility” was announced as part of Cadillac’s portfolio of luxury and EV vehicles. It included an autonomous shuttle and an electric vertical takeoff and landing (eVTOL) aircraft, more commonly known as a flying car or air taxi.

Michael Simcoe, vice president of GM global design, said each concept reflected “the needs and wants of the passengers at a particular moment in time and GM’s vision of the future of transportation.”

“This is a special moment for General Motors as we reimagine the future of personal transportation for the next five years and beyond,” Simcoe said.

This robot will vacuum and serve you a martini, all with one hand… just ignore the dust in your glass.


Two of the new robots are more futuristic, but one of Samsung’s new Bots will be available in the US this year — a robot vacuum that doubles as a home monitoring device.

Recently, there has been a reemergence of interest in optical computing platforms for artificial intelligence-related applications. Optics is ideally suited for realizing neural network models because of the high speed, large bandwidth and high interconnectivity of optical information processing. Introduced by UCLA researchers, Diffractive Deep Neural Networks (D2NNs) constitute such an optical computing framework, comprising successive transmissive and/or reflective diffractive surfaces that can process input information through light-matter interaction. These surfaces are designed in a computer using standard deep learning techniques, then fabricated and assembled to build a physical optical network. Through experiments performed at terahertz wavelengths, the capability of D2NNs to classify objects all-optically was demonstrated. Beyond object classification, the success of D2NNs in performing various optical design and computation tasks, such as spectral filtering, spectral information encoding, and optical pulse shaping, has also been demonstrated.
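To make the framework above concrete, the following is a minimal sketch of how such a stack of diffractive surfaces could be simulated and trained with standard deep learning tools: trainable phase-only masks separated by free-space propagation, with class scores read out as the optical power falling on detector regions. The grid size, wavelength, layer spacing, and detector layout below are illustrative assumptions, not the parameters of the UCLA design.

```python
# Toy sketch of a diffractive network: trainable phase masks + free-space propagation.
import torch
import torch.nn as nn

N = 64                  # grid size (pixels per side), assumed
WAVELENGTH = 0.75e-3    # ~0.75 mm, terahertz-range wavelength, assumed
PIXEL = 0.4e-3          # pixel pitch in meters, assumed
DISTANCE = 30e-3        # layer-to-layer spacing in meters, assumed

def angular_spectrum(field, distance):
    """Propagate a complex field by `distance` using the angular spectrum method."""
    fx = torch.fft.fftfreq(N, d=PIXEL)
    FX, FY = torch.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / WAVELENGTH**2 - FX**2 - FY**2
    kz = 2 * torch.pi * torch.sqrt(torch.clamp(arg, min=0.0))
    transfer = torch.exp(1j * kz * distance)
    return torch.fft.ifft2(torch.fft.fft2(field) * transfer)

class DiffractiveNet(nn.Module):
    def __init__(self, num_layers=5, num_classes=10):
        super().__init__()
        # Each diffractive surface is modeled as a trainable phase-only mask.
        self.phases = nn.ParameterList(
            [nn.Parameter(torch.zeros(N, N)) for _ in range(num_layers)]
        )
        # Fixed detector regions on the output plane, one per class (assumed layout).
        self.register_buffer("detectors", self._make_detectors(num_classes))

    def _make_detectors(self, num_classes):
        masks = torch.zeros(num_classes, N, N)
        for c in range(num_classes):
            row, col = divmod(c, 5)
            masks[c, 10 + 20 * row: 18 + 20 * row, 5 + 12 * col: 13 + 12 * col] = 1.0
        return masks

    def forward(self, amplitude):                 # amplitude: (batch, N, N), real-valued
        field = amplitude.to(torch.complex64)
        for phase in self.phases:
            field = angular_spectrum(field, DISTANCE)
            field = field * torch.exp(1j * phase)   # light-matter interaction at the surface
        field = angular_spectrum(field, DISTANCE)
        intensity = field.abs() ** 2
        # Class score = total optical power falling on that class's detector region.
        return torch.einsum("bij,cij->bc", intensity, self.detectors)
```

In a full training loop, these detector scores would be fed to an ordinary classification loss and the phase masks optimized with a standard optimizer, after which the learned surfaces would be fabricated as physical layers.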

In their latest paper published in Light: Science & Applications, the UCLA team reports a leapfrog advance in D2NN-based image classification accuracy through ensemble learning. The key ingredient behind the success of their approach can be intuitively understood through an experiment by Sir Francis Galton (1822–1911), an English philosopher and statistician, who, while visiting a livestock fair, asked the participants to guess the weight of an ox. None of the hundreds of participants succeeded in guessing the weight. But to his astonishment, Galton found that the median of all the guesses came quite close: 1,207 pounds, within 1% of the true weight of 1,198 pounds. This experiment reveals the power of combining many predictions to obtain a much more accurate prediction. Ensemble learning manifests this idea in machine learning, where improved predictive performance is attained by combining multiple models.
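As a quick numeric illustration of this “wisdom of the crowd” effect, the toy simulation below combines many individually noisy guesses by taking their median, which lands far closer to the true value than a typical single guess. The noise model and number of guesses are assumptions made purely for illustration.

```python
# Many noisy guesses, combined by the median, beat a typical individual guess.
import numpy as np

rng = np.random.default_rng(0)
true_weight = 1198.0                                   # pounds, as in Galton's account
guesses = true_weight + rng.normal(0, 80, size=800)    # 800 noisy guesses, assumed spread

median_error = abs(np.median(guesses) - true_weight)
typical_error = np.mean(np.abs(guesses - true_weight))
print(f"median of guesses off by {median_error:.1f} lb; "
      f"average individual guess off by {typical_error:.1f} lb")
```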

In their scheme, the UCLA researchers reported an ensemble formed by multiple D2NNs operating in parallel, each of which is individually trained and diversified by optically filtering its input using a variety of filters. 1,252 D2NNs, uniquely designed in this manner, formed the initial pool of networks, which was then pruned using an iterative pruning algorithm so that the resulting physical ensemble is not prohibitively large. The final prediction comes from a weighted average of the decisions of all the constituent D2NNs in an ensemble. The researchers evaluated the performance of the resulting D2NN ensembles on the CIFAR-10 image dataset, which contains 60,000 natural images categorized into 10 classes and is extensively used for benchmarking machine learning algorithms. Simulations of their designed ensemble systems revealed that diffractive optical networks can significantly benefit from the ‘wisdom of the crowd’.
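A rough sketch of this ensemble-and-prune idea, operating on precomputed class scores rather than on actual diffractive networks, might look as follows. The greedy removal rule and uniform weights used here are simplifying assumptions for illustration, not the paper’s exact pruning algorithm or weighting.

```python
# Greedily prune a pool of models, then combine the survivors by weighted averaging.
import numpy as np

def ensemble_accuracy(scores, weights, labels):
    """Accuracy of the weighted-average decision over stacked model scores."""
    combined = np.tensordot(weights, scores, axes=1)   # (samples, classes)
    return np.mean(combined.argmax(axis=1) == labels)

def greedy_prune(scores, labels, target_size):
    """Repeatedly drop the member whose removal hurts validation accuracy least."""
    keep = list(range(scores.shape[0]))
    while len(keep) > target_size:
        best_acc, drop = -1.0, None
        for m in keep:
            trial = [k for k in keep if k != m]
            w = np.ones(len(trial)) / len(trial)
            acc = ensemble_accuracy(scores[trial], w, labels)
            if acc > best_acc:
                best_acc, drop = acc, m
        keep.remove(drop)
    return keep

# Toy usage: random "validation scores" for 20 models, 500 samples, 10 classes.
rng = np.random.default_rng(1)
labels = rng.integers(0, 10, size=500)
scores = rng.random((20, 500, 10))
scores[:, np.arange(500), labels] += 0.3        # give every model a weak correct signal
kept = greedy_prune(scores, labels, target_size=5)
w = np.ones(len(kept)) / len(kept)
print("pruned ensemble accuracy:", ensemble_accuracy(scores[kept], w, labels))
```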

New research led by teams at the University of Toronto (U of T) and Northwestern University employs machine learning to craft the best building blocks for assembling framework materials for use in a targeted application.

Computer-based artificial intelligence can function more like human intelligence when programmed to use a much faster technique for learning new objects, say two neuroscientists who designed a model to mirror human visual learning.

Field tests validate tech that automatically links diverse radio waveforms in contested environments.

A DARPA network technology program recently concluded field tests demonstrating novel software that bridges multiple disparate radio networks to enable communication between incompatible tactical radio data links – even in the presence of hostile jamming. The technology is transitioning to Naval Air Systems Command (NAVAIR) and the Marine Corps, which plan to put the software on a software-reprogrammable, multi-channel radio platform for use on aircraft and ground vehicles.

Started in 2016, the Dynamic Network Adaptation for Mission Optimization program, or DyNAMO, has developed technologies that enable automated, real-time dynamic configuration of tactical networks to ensure that heterogeneous radio nodes – whether on ground, air, or sea – can interoperate in a contested battlespace.

As a capstone event to conclude the program, DARPA recently demonstrated DyNAMO capabilities in over-the-air field tests at the Air Force Research Lab’s experimentation and test facility near Rome, New York. Diverse military tactical data links, including Link 16, Tactical Targeting Network Technology (TTNT), Common Data Link (CDL), and Wi-Fi networks, were deployed to the test site. DyNAMO successfully provided uninterrupted network connectivity among all the data links under varying conditions in a simulated contested environment.

Next capture attempts scheduled to occur in spring of 2021

Attempts at airborne retrieval of three unmanned air vehicles, nicknamed Gremlins, were just inches from success in DARPA’s latest flight test series, which started on October 28. Each X-61A Gremlins Air Vehicle (GAV) flew for more than two hours, successfully validating all autonomous formation flying positions and safety features. Nine attempts were made at mechanical engagement of the GAVs with the docking bullet extended from a C-130 aircraft, but relative movement was more dynamic than expected, and each GAV ultimately parachuted safely to the ground.

“All of our systems looked good during the ground tests, but the flight test is where you truly find how things work,” said Scott Wierzbanowski, program manager for Gremlins in DARPA’s Tactical Technology Office. “We came within inches of connection on each attempt but, ultimately, it just wasn’t close enough to engage the recovery system.”

Hours of data were collected over three flights, including data on the aerodynamic interactions between the docking bullet and the GAVs. Efforts are already underway to analyze that data, update models and designs, and conduct additional flights and retrieval attempts in a fourth deployment this spring.

A new “transforming” rover in development at NASA will be able to explore rough terrain unlike any rover before it.

DuAxel (short for “dual-Axel”) gets its name because it’s made of a pair of two-wheeled Axel rovers. The Axel rover is a simple, two-wheeled rover with a long tether that connects to a larger vehicle and stabilizes the rover as it descends into and explores craters that other rovers would not be able to handle. The Axel is equipped with a robotic arm that can collect samples, as well as stereoscopic cameras to gather imagery.