
Circa 2020.


Every robot is, at its heart, a computer that can move. That is true from the largest plane-sized flying machines down to the smallest controllable nanomachines, some small enough that they may someday navigate through blood vessels.

New research, published August 26 in Nature, shows that it is possible to build legs into robots mere microns in length. When powered by lasers, these tiny machines can move, and some day, they may save lives in operating rooms or even, possibly, on the battlefield.

This project, funded in part by the Army Research Office and the Air Force Office of Scientific Research, demonstrated that, by adapting principles from origami, nanoscale legged robots can be printed and then directed.

But what struck me about his essay is that last clause: “if we as a society manage it responsibly.” Because, as Altman also admits, if he is right, then A.I. will generate phenomenal wealth largely by destroying countless jobs — that’s a big part of how everything gets cheaper — and by shifting huge amounts of wealth from labor to capital. And whether that world becomes a post-scarcity utopia or a feudal dystopia hinges on how wealth, power and dignity are then distributed — it hinges, in other words, on politics.


Will A.I. give us the lives of leisure we long for — or usher in a feudal dystopia? It depends.

Deep neural networks have achieved highly promising results on several tasks, including image and text classification. Nonetheless, many of these models are prone to what is known as catastrophic forgetting: when trained on a new task, they tend to rapidly forget how to complete the tasks they were trained on in the past.

Researchers at Université Paris-Saclay-CNRS recently introduced a new technique to alleviate forgetting in binarized neural networks. This technique, presented in a paper published in Nature Communications, is inspired by the idea of synaptic metaplasticity, the process through which synapses (junctions between two neurons) adapt and change over time in response to experiences.

“My group had been working on binarized neural networks for a few years,” Damien Querlioz, one of the researchers who carried out the study, told TechXplore. “These are a highly simplified form of deep neural networks, the flagship method of modern artificial intelligence, which can perform complex tasks with reduced memory requirements and energy consumption. In parallel, Axel, then a first-year Ph.D. student in our group, started to work on the synaptic metaplasticity models introduced in 2005 by Stefano Fusi.”
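The paper is the authoritative source for the exact update rule, but the core idea can be sketched briefly: a binarized network keeps a real-valued “hidden” weight behind each binary weight, and metaplasticity damps any update that would push a well-consolidated hidden weight back toward zero (and hence flip its sign). The NumPy sketch below is a simplified reading of that idea; the damping function, the constant `m`, and all values are illustrative, not the paper’s.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real-valued "hidden" weights; the forward pass uses only their signs.
w_hidden = rng.normal(0.0, 0.1, size=100)

def metaplastic_update(w_hidden, grad, lr=0.01, m=1.3):
    """One metaplasticity-inspired SGD step on the hidden weights.

    Steps that would pull a hidden weight back toward zero (risking a
    sign flip of the binary weight) are damped by a factor that shrinks
    as the weight's magnitude grows, so synapses consolidated on old
    tasks resist being overwritten by a new one.
    """
    step = -lr * grad
    weakening = np.sign(step) != np.sign(w_hidden)
    damping = np.where(weakening, 1.0 - np.tanh(m * np.abs(w_hidden)) ** 2, 1.0)
    return w_hidden + damping * step

grad = rng.normal(0.0, 1.0, size=100)   # stand-in for a real gradient
w_hidden = metaplastic_update(w_hidden, grad)
w_binary = np.sign(w_hidden)            # weights the network actually uses
```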

Analog AI processor company Mythic launched its M1076 Analog Matrix Processor today to provide low-power AI processing.

The company uses analog circuits rather than digital ones to build its processor, making it easier to integrate memory into the processor and to run the device at roughly a tenth of the power of a typical system-on-chip (SoC) or graphics processing unit (GPU).

The M1076 AMP can deliver up to 25 trillion operations per second (TOPS) of AI compute in a 3-watt power envelope. It targets edge-AI applications, but the company says it can scale from the edge to server deployments, addressing vertical markets including smart cities, industrial applications, enterprise applications, and consumer devices.
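To put those figures in perspective, here is a quick back-of-the-envelope calculation using only the numbers claimed above (the GPU comparison point is inferred from the “10 times less power” claim, not measured):

```python
# Efficiency implied by the claimed specs: 25 TOPS in a 3 W envelope.
m1076_tops = 25.0
m1076_watts = 3.0

efficiency = m1076_tops / m1076_watts
print(f"M1076: {efficiency:.1f} TOPS/W")                   # ~8.3 TOPS/W

# If a typical SoC/GPU needs ~10x the power for the same throughput,
# its implied efficiency would be roughly:
print(f"Comparison system: {efficiency / 10:.2f} TOPS/W")  # ~0.83 TOPS/W
```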

Designing an autonomous, learning smart garden.


In the first episode of Build Out, Colt and Reto — tasked with designing the architecture for a “Smart Garden” — supplied two very different concepts that nevertheless shared many overlapping elements. Take a look at the video to see what they came up with, then continue reading to learn from their explorations and build your very own Smart Garden.

Both solutions aim to optimize plant care using sensors, weather forecasts, and machine learning. Watering and fertilizing routines for the plants are updated regularly to guarantee the best growth, health, and fruit yield possible.

Colt’s solution is optimized for small-scale home farming, using a modified CNC machine to care for a fruit or vegetable patch. The drill bit is replaced with a liquid spout, UV light, and camera, while the cutting area is replaced with a plant bed that includes sensors to track moisture, nutrient levels, and weight.
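Neither design’s code is published, but the control loop both concepts share (read the bed’s sensors, consult the forecast, then decide whether to water) can be sketched in a few lines. Everything below, including the function names, thresholds, and the deficit-to-liters mapping, is hypothetical; in the actual designs a learned model would refine these rules over time.

```python
import random  # stands in for real sensor and forecast APIs

# Hypothetical thresholds; real values depend on the plant and soil.
MOISTURE_TARGET = 0.35      # target volumetric water content (fraction)
RAIN_SKIP_THRESHOLD = 0.6   # skip watering if rain is this likely

def read_soil_moisture() -> float:
    """Placeholder for the plant bed's moisture sensor."""
    return random.uniform(0.1, 0.5)

def rain_probability_next_24h() -> float:
    """Placeholder for a weather-forecast API call."""
    return random.uniform(0.0, 1.0)

def liters_to_dispense() -> float:
    """Decide how much water to send through the liquid spout.

    A deliberately simple rule: water toward the moisture target,
    but hold off when rain looks likely.
    """
    moisture = read_soil_moisture()
    if moisture >= MOISTURE_TARGET:
        return 0.0
    if rain_probability_next_24h() >= RAIN_SKIP_THRESHOLD:
        return 0.0
    deficit = MOISTURE_TARGET - moisture
    return round(deficit * 10.0, 2)  # crude deficit-to-liters mapping

print(f"Dispense {liters_to_dispense()} L")
```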

It’s ten times more powerful than the current U.S. effort.


Earlier this month, Chinese artificial intelligence (A.I.) researchers at the Beijing Academy of Artificial Intelligence (BAAI) unveiled Wu Dao 2.0, the world’s biggest natural language processing (NLP) model. And it’s a big deal.

NLP is a branch of A.I. research that aims to give computers the ability to understand text and spoken words and respond to them in much the same way human beings can.

Last year, the San Francisco–based nonprofit A.I. research laboratory OpenAI wowed the world when it released its GPT-3 (Generative Pre-trained Transformer 3) language model. GPT-3 is a 175 billion–parameter deep learning model trained on text datasets containing hundreds of billions of words. A parameter is a value inside a neural network, tuned during training, that gives each piece of input a greater or lesser weighting; together, the parameters encode the network’s learned perspective on the data.
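For a sense of how parameter counts add up, here is a toy calculation for fully connected layers. The layer sizes are arbitrary examples; GPT-3’s actual architecture is a transformer, which counts parameters differently but on the same principle.

```python
def dense_layer_params(n_in: int, n_out: int) -> int:
    """Parameters in one fully connected layer:
    one weight per input-output pair, plus one bias per output."""
    return n_in * n_out + n_out

# A small three-layer network, for scale:
layers = [(784, 512), (512, 512), (512, 10)]
total = sum(dense_layer_params(i, o) for i, o in layers)
print(f"{total:,} parameters")  # 669,706

# GPT-3 contains 175 billion such learned values; Wu Dao 2.0 is
# reported to be roughly ten times larger still.
```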

Since the DeepSpeed optimization library was introduced last year, it has rolled out numerous novel optimizations for training large AI models—improving scale, speed, cost, and usability. As large models have quickly evolved over the last year, so too has DeepSpeed. Whether enabling researchers to create the 17-billion-parameter Microsoft Turing Natural Language Generation (Turing-NLG) with state-of-the-art accuracy, achieving the fastest BERT training record, or supporting 10x larger model training using a single GPU, DeepSpeed continues to tackle challenges in AI at Scale with the latest advancements for large-scale model training. Now, the novel memory optimization technology ZeRO (Zero Redundancy Optimizer), included in DeepSpeed, is undergoing a further transformation of its own. The improved ZeRO-Infinity offers the system capability to go beyond the GPU memory wall and train models with tens of trillions of parameters, an order of magnitude bigger than state-of-the-art systems can support. It also offers a promising path toward training 100-trillion-parameter models.

ZeRO-Infinity at a glance: ZeRO-Infinity is a novel deep learning (DL) training technology for scaling model training, from a single GPU to massive supercomputers with thousands of GPUs. It powers unprecedented model sizes by leveraging the full memory capacity of a system, concurrently exploiting all heterogeneous memory (GPU, CPU, and Non-Volatile Memory Express, or NVMe for short). Learn more about the highlights of ZeRO-Infinity in our paper, “ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning.”
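As a concrete illustration of how this is switched on in practice: DeepSpeed is configured through a JSON-style dictionary, and NVMe offload lives under ZeRO stage 3. The sketch below shows the relevant keys; the paths, batch size, and surrounding training code are placeholders, and the DeepSpeed documentation covers the full set of tuning options.

```python
# Representative DeepSpeed config enabling ZeRO-Infinity: ZeRO stage 3
# with parameters and optimizer state offloaded to NVMe storage.
ds_config = {
    "train_batch_size": 16,               # placeholder value
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {
            "device": "nvme",
            "nvme_path": "/local_nvme",   # placeholder path
        },
        "offload_optimizer": {
            "device": "nvme",
            "nvme_path": "/local_nvme",
        },
    },
}

# The config is then handed to DeepSpeed along with the model, e.g.:
# model_engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_config)
```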

When you put these three factors together—the bounty of technological advances, the compressed restructuring timetable due to covid-19, and an economy finally running at full capacity—the ingredients are in place for a productivity boom. This will not only boost living standards directly but also free up resources for a more ambitious policy agenda.


AI and other digital technologies have been surprisingly slow to improve economic growth. But that could be about to change.

Researchers at Oxford University have developed an AI-enabled system that can comprehensively identify people in videos by conducting detective-like, multi-domain investigations into who they might be, drawing on context and on a variety of publicly available secondary sources, including the matching of audio sources with visual material from the internet.

Though the research centers on the identification of public figures, such as people appearing in television programs and films, the principle of inferring identity from context is theoretically applicable to anyone whose face, voice, or name appears in online sources.

Indeed, the paper’s own definition of fame is not limited to show-business workers, with the researchers declaring, ‘We denote people with many images of themselves online as famous’.