
“These are novel living machines. They are not a traditional robot or a known species of animal. It is a new class of artifact: a living and programmable organism,” says Joshua Bongard, an expert in computer science and robotics at the University of Vermont (UVM) and one of the leaders of the research.

As the scientist explains, these living bots do not look like traditional robots: they have no shiny gears or robotic arms. Rather, they look like a tiny blob of pink flesh in motion, a biological machine that the researchers say can accomplish things traditional robots cannot.

Xenobots are synthetic organisms designed automatically by a supercomputer, via a process of trial and error (an evolutionary algorithm), to perform a specific task, and are built from a combination of different biological tissues.
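The trial-and-error loop described above can be sketched as a minimal evolutionary algorithm. This is an illustration only, not UVM's actual pipeline: the real system evolves simulated 3-D tissue layouts on a supercomputer, while here a "design" is just a bit string and `fitness` is a hypothetical stand-in scoring function.

```python
import random

def fitness(design):
    # Hypothetical score: reward designs with more active cells.
    return sum(design)

def evolve(design_len=20, pop_size=30, generations=50, seed=0):
    rng = random.Random(seed)
    # Start from a random population of candidate designs.
    pop = [[rng.randint(0, 1) for _ in range(design_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]       # selection: keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            i = rng.randrange(design_len)     # mutation: flip one random bit
            child[i] ^= 1
            children.append(child)
        pop = survivors + children            # next generation
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Because the best designs always survive, fitness never decreases from one generation to the next; the real system applies the same select-mutate-repeat idea in a physics simulator before the winning designs are built from living cells.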

Since the DeepSpeed optimization library was introduced last year, it has rolled out numerous novel optimizations for training large AI models—improving scale, speed, cost, and usability. As large models have quickly evolved over the last year, so too has DeepSpeed. Whether enabling researchers to create the 17-billion-parameter Microsoft Turing Natural Language Generation (Turing-NLG) with state-of-the-art accuracy, achieving the fastest BERT training record, or supporting 10x larger model training using a single GPU, DeepSpeed continues to tackle challenges in AI at Scale with the latest advancements for large-scale model training. Now, the novel memory optimization technology ZeRO (Zero Redundancy Optimizer), included in DeepSpeed, is undergoing a further transformation of its own. The improved ZeRO-Infinity offers the system capability to go beyond the GPU memory wall and train models with tens of trillions of parameters, an order of magnitude bigger than state-of-the-art systems can support. It also offers a promising path toward training 100-trillion-parameter models.
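To make the heterogeneous-memory idea concrete, here is an illustrative DeepSpeed configuration sketch (not an official recommendation): ZeRO stage 3 with optimizer and parameter state offloaded to NVMe, the kind of setup ZeRO-Infinity enables. The NVMe path and batch size are placeholders.

```python
# Illustrative config only; real values depend on your cluster.
ds_config = {
    "train_batch_size": 16,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,  # partition parameters, gradients, and optimizer state
        "offload_optimizer": {"device": "nvme", "nvme_path": "/local_nvme"},
        "offload_param": {"device": "nvme", "nvme_path": "/local_nvme"},
    },
}
print(ds_config["zero_optimization"]["stage"])
```

A config like this would be passed to `deepspeed.initialize` alongside the model; the point is that GPU, CPU, and NVMe memory are all treated as places where model state can live.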

ZeRO-Infinity at a glance: ZeRO-Infinity is a novel deep learning (DL) training technology for scaling model training, from a single GPU to massive supercomputers with thousands of GPUs. It powers unprecedented model sizes by leveraging the full memory capacity of a system, concurrently exploiting all heterogeneous memory (GPU, CPU, and Non-Volatile Memory express, or NVMe for short). Learn more in our paper, “ZeRO-Infinity: Breaking the GPU Memory Wall for Extreme Scale Deep Learning.”

When OpenAI’s GPT-3 model made its debut in May 2020, its performance was widely considered the state of the art. Capable of generating text indistinguishable from human-crafted prose, GPT-3 set a new standard in deep learning. But what a difference a year makes. Researchers from the Beijing Academy of Artificial Intelligence (BAAI) announced on Tuesday the release of their own generative deep learning model, Wu Dao, a mammoth AI seemingly capable of doing everything GPT-3 can do, and more.

First off, Wu Dao is flat-out enormous. It has 1.75 trillion parameters (the coefficients the model learns during training), a full ten times more than GPT-3’s 175 billion and 150 billion more than Google’s Switch Transformer.

In order to train a model with this many parameters, and to do so quickly (Wu Dao 2.0 arrived just three months after version 1.0’s release in March), the BAAI researchers first developed an open-source training system akin to Google’s Mixture of Experts, dubbed FastMoE. The system, which runs on PyTorch, enabled the model to be trained on clusters of supercomputers as well as on conventional GPUs. That gives FastMoE more flexibility than Google’s system: because it doesn’t require proprietary hardware like Google’s TPUs, it can run on off-the-shelf hardware, including supercomputing clusters.
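The Mixture-of-Experts routing idea behind systems like FastMoE can be sketched in a few lines. This is a toy top-1 gating example, not FastMoE's actual API: a learned gate picks one expert per token, so only a fraction of the model's parameters run for any given input, which is how trillion-parameter models stay trainable.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, n_tokens = 8, 4, 5

# A gate (router) plus several independent expert weight matrices.
gate_w = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

tokens = rng.standard_normal((n_tokens, d_model))
scores = tokens @ gate_w           # gating score for each (token, expert) pair
choice = scores.argmax(axis=1)     # top-1 routing: one expert per token

out = np.empty_like(tokens)
for e in range(n_experts):
    mask = choice == e
    if mask.any():
        # Only the chosen expert's parameters touch these tokens.
        out[mask] = tokens[mask] @ experts[e]

print(out.shape)
```

In a real MoE layer the experts are feed-forward networks sharded across devices, and the router is trained jointly with them; the sparsity shown here is what decouples total parameter count from per-token compute.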

A team at Stony Brook University used ORNL’s Summit supercomputer to model X-ray burst flames spreading across the surfaces of dense neutron stars.

At the heart of some of the smallest and densest stars in the universe lies nuclear matter that might exist in never-before-observed exotic phases. Neutron stars, which form when the cores of massive stars collapse in a luminous supernova explosion, are thought to contain matter at energies greater than what can be achieved in particle accelerator experiments, such as the ones at the Large Hadron Collider and the Relativistic Heavy Ion Collider.

Although scientists cannot recreate these extreme conditions on Earth, they can use neutron stars as ready-made laboratories to better understand exotic matter. Simulating neutron stars, many of which are only 12.5 miles in diameter but boast around 1.4 to 2 times the mass of our sun, can provide insight into the matter that might exist in their interiors and give clues as to how it behaves at such densities.
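A quick back-of-the-envelope calculation shows why those numbers are so extreme: about 1.4 solar masses packed into a sphere only 12.5 miles across. The figures below are the ones quoted above; the resulting density is an order-of-magnitude estimate, not a precise model.

```python
import math

M_SUN = 1.989e30            # kg, mass of the sun
MILE = 1609.344             # m

mass = 1.4 * M_SUN          # a typical neutron star mass
radius = 12.5 * MILE / 2    # 12.5-mile diameter, ~10 km radius
volume = (4 / 3) * math.pi * radius ** 3
density = mass / volume     # kg per cubic meter
print(f"{density:.1e}")
```

That works out to roughly 10^17–10^18 kg/m³, hundreds of trillions of times denser than ordinary matter, which is why these interiors cannot be reproduced in any laboratory.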

The Israeli military is calling Operation Guardian of the Walls the first artificial-intelligence war. The IDF established an advanced AI technological platform that centralized all data on terrorist groups in the Gaza Strip in one system, enabling the analysis and extraction of intelligence.

The IDF used artificial intelligence and supercomputing during the last conflict with Hamas in the Gaza Strip.

The US Department of Energy on Thursday is officially dedicating Perlmutter, a next-generation supercomputer that will deliver nearly four exaflops of AI performance. The system, based at the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory, is the world’s fastest on the 16-bit and 32-bit mixed-precision math used for AI.

The HPE Cray system is being installed in two phases. Each of Phase 1’s GPU-accelerated nodes has four Nvidia A100 Tensor Core GPUs, for a total of 6,159 A100 GPUs. Each Phase 1 node also has a single AMD Milan CPU.
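The “nearly four exaflops of AI performance” figure can be sanity-checked from the GPU count, assuming Nvidia’s quoted A100 peak of 624 TFLOPS for FP16 tensor math with structured sparsity (312 TFLOPS dense); the exact accounting NERSC used is an assumption here.

```python
n_gpus = 6159
peak_per_gpu = 624e12           # FLOP/s, A100 FP16 tensor peak with sparsity
total = n_gpus * peak_per_gpu
print(total / 1e18)             # total in exaflops, ~3.84
```

Using the dense (non-sparse) peak instead would give about half that, which is why “AI performance” figures always need the precision and sparsity assumptions spelled out.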

Two sticks of RAM giving you 1TB of memory may soon be the norm.

While consumers today typically use computers with 8GB or 16GB of DDR4 RAM, Samsung is pushing ahead with the next generation of memory modules. Its latest stick of RAM is a 512GB DDR5 module running at 7,200 Mbps.

The new module will be used in servers running “the most extreme compute-hungry, high-bandwidth workloads.” That means supercomputers, artificial intelligence, and machine learning. It was made possible by HKMG (high-k metal gate) technology, which Samsung adopted back in 2018 for its GDDR6 memory. HKMG replaces the insulator layer in DRAM structures; the high-dielectric material in that layer reduces current leakage and therefore allows higher performance. At the same time, Samsung managed to reduce the new module’s power usage by 13%.
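To put the 7,200 Mbps figure in context: that rate is per data pin, and a standard module presents 64 data bits (DDR5 splits these into two 32-bit channels), so the module-level bandwidth follows from a short calculation.

```python
transfer_rate = 7.2e9        # transfers per second per pin (7,200 Mbps)
bus_width_bits = 64          # data bits per module (two 32-bit DDR5 channels)
bandwidth = transfer_rate * bus_width_bits / 8   # bytes per second
print(bandwidth / 1e9)       # ~57.6 GB/s per module
```

That is roughly double what a typical DDR4-3200 module delivers (25.6 GB/s), which is the kind of jump the compute-hungry workloads above are after.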

“Samsung is the only semiconductor company with logic and memory capabilities and the expertise to incorporate HKMG cutting-edge logic technology into memory product development,” said Young-Soo Sohn, Vice President of the DRAM Memory Planning/Enabling Group at Samsung Electronics. “By bringing this type of process innovation to DRAM manufacturing, we are able to offer our customers high-performance, yet energy-efficient memory solutions to power the computers needed for medical research, financial markets, autonomous driving, smart cities and beyond.”


As we advance as a species, a lot of things that seemed impossible a century ago are now reality. For example, there was a time when most people believed the Earth was flat. Then Eratosthenes came onto the scene and demonstrated that the world is round by calculating its circumference.

At the time, that was groundbreaking. Today, quantum mechanics rules the roost. This branch of physics describes the physical world at the scale of atoms and electrons, where many of the equations of classical mechanics no longer apply. With that said, let’s take a look at three of the most amazing quantum breakthroughs bringing liberation and freedom to the world of science today!

We kick things off with a team of Chinese scientists who claim to have constructed a quantum computer able to perform certain computations almost 100 trillion times faster than the world’s most advanced supercomputer.

The breakthrough concerns quantum computational advantage, also famously known as quantum supremacy, which has become a hotly contested tech race between Chinese researchers and some of the largest US tech corporations, such as Amazon, Google, and Microsoft.
Google, for example, announced in 2019 that its quantum processor had performed in about 200 seconds a computation it estimated would take the world’s fastest classical supercomputer thousands of years.