
In simple terms, comparing previous autonomy standards with Exyn’s is like the difference between self-navigating a single, defined road and navigating uncharted, unmapped terrain. Unlike a car, however, a drone must be able to manoeuvre in three dimensions and pack all of its intelligence and sensors into a fraction of the body size, under severe weight restrictions.

“People have been talking about Level 4 Autonomy in driverless cars for some time, but having that same degree of intelligence condensed onboard a self-sufficient UAV is an entirely different engineering challenge in and of itself,” said Jason Derenick, CTO at Exyn Technologies. “Achieving Level 5 is the holy grail of autonomous systems – this is when the drone can demonstrate 100% control in an unbounded environment, without any input from a human operator whatsoever. While I don’t believe we will witness this in my lifetime, I do believe we will push the limits of what’s possible with advanced Level 4. We are already working on attaining Level 4B autonomy with swarms, or collaborative multi-robot systems.”

“There’s things that we want to do to make it faster, make it higher resolution, make it more accurate,” said Elm, in an interview with Forbes. “But the other thing we were kind of contemplating is basically the ability to have multiple robots collaborate with each other so you can scale the problem – both in terms of scale and scope. So you can have multiple identical robots on a mission, so you can actually now cover a larger area, but also have specialised robots that might be different. So, heterogeneous swarms so they can actually now have specialised tasks and collaborate with each other on a mission.”

NASA is returning to sizzling Venus, our closest yet perhaps most overlooked neighbour, after decades of exploring other worlds.

The US space agency’s new administrator, Bill Nelson, announced two new robotic missions to the solar system’s hottest planet during his first major address to employees.

“These two sister missions both aim to understand how Venus became an inferno-like world capable of melting lead at the surface,” Nelson said.

For reference, we can go back to the HRNet paper. The researchers used a dedicated Nvidia V100, a massive and extremely expensive GPU specially designed for deep learning inference. With no memory limitation and unhindered by other in-game computations, the V100’s inference time was 150 milliseconds per input, or roughly 7 frames per second, which is not nearly enough for a smooth game.
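As a quick sanity check on those numbers, here is a minimal sketch (plain Python written for this piece, not code from the HRNet paper) that converts per-frame latency into throughput and compares it against a 30 fps smooth-gameplay target, which is an assumed baseline used only for comparison:

```python
# Minimal sketch: convert per-frame inference latency into frames per second.
# The 150 ms figure is the V100 latency quoted above; the 30 fps target is an
# assumed baseline for smooth gameplay, used here only for comparison.

latency_ms = 150.0                         # inference time per input, in milliseconds
fps = 1000.0 / latency_ms                  # ~6.7 frames per second

target_fps = 30.0                          # assumed threshold for a smooth game
required_latency_ms = 1000.0 / target_fps  # ~33 ms per frame

print(f"Throughput at {latency_ms:.0f} ms/frame: {fps:.1f} fps")
print(f"Latency needed for {target_fps:.0f} fps: {required_latency_ms:.1f} ms/frame")
```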

Developing and training neural networks

Another vexing problem is the cost of developing and training the image-enhancing neural network. Any company that wants to replicate Intel’s deep learning models will need three things: data, computing resources, and machine learning talent.

One of the biggest highlights of Build, Microsoft’s annual software development conference, was the presentation of a tool that uses deep learning to generate source code for office applications. The tool uses GPT-3, a massive language model developed by OpenAI last year and made available to select […].

Enlight uses light polarization to maximize resolution and to find critical defects in half the time of a typical optical scanner. For the first time, the scanner will capture both direct light bouncing off the wafer surface, known as “brightfield,” and scattered light, known as “greyfield.” That’s like scanning two things in one pass, cutting the required time in half.

Natural Language Processing (NLP) has seen rapid progress in recent years as computation at scale has become more available and datasets have grown larger. At the same time, recent work has shown large language models to be effective few-shot learners, achieving high accuracy on many NLP datasets without additional fine-tuning. As a result, state-of-the-art NLP models have grown at an exponential rate. Training such models, however, is challenging for two reasons: the parameters of the largest models no longer fit in the memory of even a single top-of-the-line GPU, and the sheer volume of compute required leads to unrealistically long training times unless the work is parallelized across many devices.
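To make the memory constraint concrete, here is a back-of-the-envelope sketch (my own illustration, not taken from any of the work discussed here) of how quickly parameter storage alone outgrows a single GPU; the model sizes are the published parameter counts for BERT-Large, GPT-2, and GPT-3:

```python
# Back-of-the-envelope sketch: memory needed just to store model weights in
# 16-bit precision, ignoring optimizer state, gradients, and activations
# (which multiply the footprint several times over during training).

def weight_memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    """Gigabytes required to hold the weights at the given precision (2 bytes = FP16)."""
    return num_params * bytes_per_param / 1e9

models = {
    "BERT-Large (340M parameters)": 340e6,
    "GPT-2 (1.5B parameters)": 1.5e9,
    "GPT-3 (175B parameters)": 175e9,
}

for name, params in models.items():
    print(f"{name}: ~{weight_memory_gb(params):.1f} GB of FP16 weights")

# Even a 32 GB or 80 GB accelerator cannot hold the largest model's weights,
# let alone its training state, which is why training must be split across
# many GPUs and still takes a long time.
```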

Unlike in other years, this year’s Microsoft Build developer conference is not packed with huge surprises — but there’s one announcement that will surely make developers’ ears perk up: The company is now using OpenAI’s massive GPT-3 natural language model in its no-code/low-code Power Apps service to translate spoken text into code in its recently announced Power Fx language.

Now don’t get carried away. You’re not going to develop the next TikTok while only using natural language. Instead, what Microsoft is doing here is taking some of the low-code aspects of a tool like Power Apps and using AI to essentially turn those into no-code experiences, too. For now, the focus is on Power Apps formulas, which, despite the low-code nature of the service, are something you’ll have to write sooner or later if you want to build an app of any sophistication.

“Using an advanced AI model like this can help our low-code tools become even more widely available to an even bigger audience by truly becoming what we call no code,” said Charles Lamanna, corporate vice president for Microsoft’s low-code application platform.
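To illustrate the general idea, here is a hedged sketch of the kind of few-shot prompting a GPT-3-style model can be steered with. The example requests, the formula syntax, and the build_prompt helper are all hypothetical illustrations for this piece, not Microsoft’s implementation or actual Power Fx output, and no real API is called:

```python
# Illustrative sketch of natural-language-to-formula translation via few-shot
# prompting. The example pairs and formulas are hypothetical placeholders; a
# production system would send `prompt` to a hosted language model and return
# the completion as the suggested formula.

EXAMPLES = [
    ("show customers whose city is Seattle",
     'Filter(Customers, City = "Seattle")'),
    ("count the orders placed this year",
     "CountRows(Filter(Orders, Year(OrderDate) = Year(Today())))"),
]

def build_prompt(request: str) -> str:
    """Assemble a few-shot prompt: worked examples followed by the new request."""
    lines = []
    for text, formula in EXAMPLES:
        lines.append(f"Request: {text}\nFormula: {formula}\n")
    lines.append(f"Request: {request}\nFormula:")
    return "\n".join(lines)

if __name__ == "__main__":
    print(build_prompt("list products with fewer than 10 items in stock"))
```

The point of the few-shot pattern is that the model infers the mapping from the worked examples alone, which is what lets a low-code formula bar start to behave like a no-code one.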