Tracking trends in supercomputing progress is as close to a pure indicator of technological progress rates as one can find. The recent flattening of this trend reflects a broader flattening of technological and economic progress relative to long-term trendlines.
For many years, a bottleneck in technological development has been how to get processors and memories to work faster together. Now, researchers at Lund University in Sweden have presented a new solution integrating a memory cell with a processor, which enables much faster calculations, as they happen in the memory circuit itself.
In an article in Nature Electronics, the researchers present a new configuration, in which a memory cell is integrated with a vertical transistor selector, all at the nanoscale. This brings improvements in scalability, speed and energy efficiency compared with current mass storage solutions.
The fundamental issue is that workloads processing large amounts of data, such as AI and machine learning, demand both speed and capacity. To achieve this, the memory and processor need to be as close to each other as possible. In addition, the calculations must run energy-efficiently, not least because current technology generates high temperatures under heavy loads.
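To make the cost of that processor–memory distance concrete, here is a back-of-envelope sketch in Python. The energy figures are illustrative assumptions (loosely in line with widely cited per-operation estimates for older silicon processes), not numbers from the Lund work:

```python
# Illustrative, assumed energy figures (order-of-magnitude only):
# fetching data from off-chip DRAM typically costs far more energy
# than computing on it, which is why placing memory next to the
# processor pays off.
E_FLOP_PJ = 20.0     # ~energy of one 64-bit floating-point op, picojoules
E_DRAM_PJ = 2000.0   # ~energy to fetch one 64-bit word from off-chip DRAM

ratio = E_DRAM_PJ / E_FLOP_PJ
print(f"fetching a word costs ~{ratio:.0f}x the arithmetic performed on it")
```

Under these assumptions, data movement dominates the energy budget by roughly two orders of magnitude, which is the motivation for computing inside the memory circuit itself.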
I am happy to say that my recently published computational COVID-19 research has been featured in a major news article by HPCwire! I led this research as CTO of Conduit. My team utilized one of the world’s top supercomputers (Frontera) to study the mechanisms by which the coronavirus’s M proteins and E proteins facilitate budding, an understudied part of the SARS-CoV-2 life cycle. Our results may provide the foundation for new ways of designing antiviral treatments which interfere with budding. Thank you to Ryan Robinson (Conduit’s CEO) and my computational team: Ankush Singhal, Shafat M., David Hill, Jr., Tamer Elkholy, Kayode Ezike, and Ricky Williams.
Conduit, created by MIT graduate (and current CEO) Ryan Robinson, was founded in 2017. But it may not have found its true calling until a few years later, when the pandemic started. While Conduit's commercial division is busy developing a Covid-19 test called nanoSPLASH, its nonprofit arm was granted access to one of the most powerful supercomputers in the world, Frontera, at the Texas Advanced Computing Center (TACC), to model the budding process of SARS-CoV-2.
Budding, the researchers explained, is how the virus's genetic material is encapsulated in a spherical envelope, and the process is key to the virus's ability to infect. Despite that, they say, it has hitherto been poorly understood:
The Conduit team, comprising Logan Thrasher Collins (CTO of Conduit), Tamer Elkholy, Shafat Mubin, David Hill, Ricky Williams, Kayode Ezike, and Ankush Singhal, sought to change that, applying for an allocation from the White House-led Covid-19 High-Performance Computing Consortium to model the budding process on a supercomputer.
Computer maintenance workers at Kyoto University have announced that, due to an apparent bug in software used to back up research data, researchers using the university's Hewlett Packard Enterprise Cray supercomputing system, which runs the Lustre file system, have lost approximately 77 terabytes of data. The team at the university's Institute for Information Management and Communication posted a Failure Information page detailing what is known so far about the data loss.
The team, part of the Supercomputing section of the university's Information Infrastructure Division, reported that files in /LARGE0 (on the DataDirect ExaScaler storage system) were lost during a system backup procedure. Some in the press have suggested that the problem arose from a faulty script that was supposed to delete only old, unneeded log files. The team noted that it was originally thought that approximately 100 TB of files had been lost, but that number has since been pared down to 77 TB. They note also that the failure occurred on December 16 between 5:50 pm and 7:00 pm. Affected users were immediately notified via email. The team further notes that approximately 34 million files were lost and that the lost files belonged to 14 known research groups. The team did not release the names of the research groups or what sort of research they were conducting, but did note that data from another four groups appears to be restorable.
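The faulty script itself has not been published, so the following is purely a hypothetical Python sketch of this class of bug: a cleanup routine whose delete scope can silently widen beyond the intended log directory, plus a minimal guard. It runs entirely inside a throwaway temp directory:

```python
import tempfile
from pathlib import Path

# Hypothetical sketch of the classic pitfall; NOT Kyoto's actual script.
# A cleanup routine handed an empty or root base directory can end up
# walking far more of the filesystem than intended.
def delete_logs(log_dir: Path, dry_run: bool = False) -> list[Path]:
    if not log_dir.name:                     # guard against "" or "/"
        raise ValueError("refusing to clean an unscoped directory")
    victims = sorted(log_dir.rglob("*.log"))
    if not dry_run:
        for f in victims:
            f.unlink()
    return victims

demo = Path(tempfile.mkdtemp())              # throwaway sandbox
(demo / "logs").mkdir()
(demo / "data").mkdir()
(demo / "logs" / "old.log").touch()
(demo / "data" / "results.dat").touch()

deleted = delete_logs(demo / "logs")
print([f.name for f in deleted])             # only the log file is removed;
                                             # results.dat survives
```

Without the guard (and with an unset or empty base path), the same `rglob`-and-delete pattern would sweep whatever tree it happened to be pointed at, which is how a log-cleanup job can destroy research data.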
Unfortunately, some of the data is lost forever. 🧐
A routine backup procedure meant to safeguard data of researchers at Kyoto University in Japan went awry and deleted 77 terabytes of data, Gizmodo reported. The incident occurred between December 14 and 16, first came to light on the 16th, and affected as many as 14 research groups at the university.
Supercomputers are the ultimate computing devices available to researchers as they try to answer complex questions on a range of topics, from molecular modeling to oil exploration and from climate change models to quantum mechanics, to name a few. Capable of performing hundreds of quadrillions of operations per second, these computers are expensive not only to build but also to operate, costing hundreds of dollars for every hour of operation.
According to Bleeping Computer, which originally reported the mishap, the university uses Cray supercomputers, with the top system employing 122,400 computing cores. The memory on the system, though, is limited to approximately 197 terabytes, so a DataDirect ExaScaler data storage system is used, which can transfer 150 GB of data per second and store up to 24 petabytes of information.
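Some quick arithmetic on the figures quoted above (the derived values are ours, not from the report):

```python
# Simple arithmetic on the published system figures.
cores = 122_400          # computing cores in the top system
memory_tb = 197          # total system memory, terabytes
storage_pb = 24          # ExaScaler capacity, petabytes
bandwidth_gb_s = 150     # ExaScaler transfer rate, GB/s

mem_per_core_gb = memory_tb * 1e12 / cores / 1e9
hours_to_stream = storage_pb * 1e15 / (bandwidth_gb_s * 1e9) / 3600

print(f"~{mem_per_core_gb:.1f} GB of memory per core")       # ~1.6 GB
print(f"~{hours_to_stream:.0f} h to stream the full 24 PB")  # ~44 h
```

In other words, even at full bandwidth, rewriting the entire storage system would take nearly two days, which is why a backup error on such a system is so costly to recover from.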
The breakthrough was made during a pilot program that saw LightOn collaborate with GENCI and IDRIS. Igor Carron, LightOn’s CEO and co-founder said in a press release: “This pilot program integrating a new computing technology within one of the world’s Supercomputers would not have been possible without the particular commitment of visionary agencies such as GENCI and IDRIS/CNRS. Together with the emergence of Quantum Computing, this world premiere strengthens our view that the next step after exascale supercomputing will be about hybrid computing.”
The technology will now be offered over the next few months to select users of the Jean Zay research community, who will use the device to undertake research on machine learning foundations, differential privacy, satellite imaging analysis, and natural language processing (NLP) tasks. LightOn's technology has already been successfully used by a community of researchers since 2018.
PARIS, Dec. 23, 2021 – LightOn announces the integration of one of its photonic co-processors in the Jean Zay supercomputer, one of the Top500 most powerful computers in the world. Under a pilot program with GENCI and IDRIS, the insertion of a cutting-edge analog photonic accelerator into High Performance Computers (HPC) represents a technological breakthrough and a world premiere. The LightOn photonic co-processor will be available to selected users of the Jean Zay research community over the next few months.
LightOn’s Optical Processing Unit (OPU) uses photonics to speed up randomized algorithms at very large scale while working in tandem with standard silicon CPUs and NVIDIA’s latest A100 GPU technology. The technology aims to reduce overall computing time and power consumption in an area deemed “essential to the future of computational science and AI for Science,” according to a 2021 U.S. Department of Energy report on “Randomized Algorithms for Scientific Computing.”
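At its core, the operation an OPU of this kind accelerates is a large fixed random projection followed by an intensity (squared-modulus) measurement. The NumPy sketch below is a software analogue with made-up dimensions; the hardware performs the projection optically with scattered light rather than with a stored matrix:

```python
import numpy as np

# Software analogue of an optical random projection: y = |R x|^2,
# where R is a large, fixed random matrix. Dimensions are illustrative.
rng = np.random.default_rng(0)
d_in, d_out = 1_000, 10_000
R = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)  # fixed random projection

x = rng.normal(size=d_in)   # input vector (e.g. one data sample)
y = np.abs(R @ x) ** 2      # intensity measured at the camera

print(y.shape)              # (10000,)
```

The appeal of doing this optically is that the projection happens in constant time regardless of the matrix size, whereas the software version above costs a full dense matrix–vector multiply.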
INRIA (France’s Institute for Research in Computer Science and Automation) researcher Dr. Antoine Liutkus provided additional context on the integration of LightOn’s co-processor in the Jean Zay supercomputer: “Our research is focused today on the question of large-scale learning. Integrating an OPU in one of the most powerful nodes of Jean Zay will give us the keys to carry out this research, and will allow us to go beyond a simple “proof of concept.”
You are on the PRO Robots channel, and in this video we invite you to find out what is new with Elon Musk: what has been done and what is yet to come. What difficulties is the Starlink project facing, why could problems with the launch of Starship lead to the bankruptcy of SpaceX, what is new with Tesla, and what new products will the company unveil next year (and not just electric cars)? All this and much more in this issue of news from Elon Musk!
0:00 In this video
0:22 The reason SpaceX may go bankrupt
1:39 Starship test
2:07 24 hours of Starbase SpaceX in Texas
2:33 SpaceX completes work on orbital launch pad
3:30 Company outlook
3:59 Starlink deadline pushed back
5:00 Blue Origin lost a lawsuit against NASA
5:39 Tesla to begin production in Berlin
6:15 Cybertruck
7:01 Starlink terminals
7:24 SolarCity
7:47 Tesla Smartphones
8:28 Tesla Dojo supercomputer
An exotic particle made up of six of the elementary particles known as quarks, whose existence has been predicted by RIKEN researchers, could deepen our understanding of how quarks combine to form the nuclei of atoms.
Quarks are the fundamental building blocks of matter. The nuclei of atoms consist of protons and neutrons, which are in turn made up of three quarks each. Particles consisting of three quarks are collectively known as baryons.
Scientists have long pondered the existence of systems containing two baryons, which are known as dibaryons. Only one dibaryon exists in nature—the deuteron, a hydrogen nucleus made up of a proton and a neutron that are very lightly bound to each other. Glimpses of other dibaryons have been caught in nuclear-physics experiments, but they had very fleeting existences.
Recently, a research team at Osaka University has successfully demonstrated the generation of megatesla (MT)-order magnetic fields via three-dimensional particle simulations on laser-matter interaction. The strength of MT magnetic fields is 1–10 billion times stronger than geomagnetism (0.3–0.5 G), and these fields are expected to be observed only in the close vicinity of celestial bodies such as neutron stars or black holes. This result should facilitate an ambitious experiment to achieve MT-order magnetic fields in the laboratory, which is now in progress.
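As an order-of-magnitude check on that comparison (plain unit conversion, 1 T = 10^4 G; the code and the 0.4 G midpoint are ours):

```python
# Unit-conversion check of the field-strength ratio quoted above.
gauss_per_tesla = 1e4
geomagnetism_g = 0.4                 # midpoint of the 0.3-0.5 G range

for field_mt in (0.1, 1.0):          # a span of "MT-order" field strengths
    field_g = field_mt * 1e6 * gauss_per_tesla
    print(f"{field_mt} MT is ~{field_g / geomagnetism_g:.1e}x geomagnetism")
```

For fields between 0.1 and 1 MT this gives ratios of roughly 10^9 to 10^10, consistent with the billions-fold figure in the text.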
Since the 19th century, scientists have strived to achieve the highest magnetic fields in the laboratory. To date, the highest magnetic field observed in the laboratory is in the kilotesla (kT)-order. In 2020, Masakatsu Murakami at Osaka University proposed a novel scheme called microtube implosions (MTI) to generate ultrahigh magnetic fields on the MT-order. Irradiating a micron-sized hollow cylinder with ultraintense and ultrashort laser pulses generates hot electrons with velocities close to the speed of light. Those hot electrons launch a cylindrically symmetric implosion of the inner wall ions towards the central axis. An applied pre-seeded magnetic field of the kilotesla-order, parallel to the central axis, bends the trajectories of ions and electrons in opposite directions because of the Lorentz force. Near the target axis, those bent trajectories of ions and electrons collectively form a strong spin current that generates MT-order magnetic fields.
In this study, one of the team members, Didar Shokov, conducted extensive three-dimensional simulations using the supercomputer OCTOPUS at Osaka University's Cybermedia Center. As a result, a distinct scaling law was found relating the performance of magnetic-field generation by MTI to external parameters such as the applied laser intensity, laser energy, and target size.