
IBM says it has built a quantum processor that cannot be simulated by a classical computer.

If true, the processor would represent a major breakthrough in quantum computing, which proponents say could radically change how we process information.

The company says the processor is so capable that simulating it on a traditional computer would require more bits than there are atoms in every human being on the planet.

Nanoscale machinery has many uses, including drug delivery, single-atom transistors, and memory storage. However, such machinery must be assembled at the nanoscale, which is a considerable challenge for researchers.

For nanotechnology engineers, the ultimate goal is to assemble functional machinery part by part at the nanoscale. In the macroscopic world, we can simply grab items to assemble them. It is no longer impossible to “grab” single molecules, but their quantum nature makes their response to manipulation unpredictable, limiting our ability to assemble them one by one. This prospect is now a step closer to reality, thanks to an international effort led by the Research Centre Jülich of the Helmholtz Association in Germany, including researchers from the Department of Chemistry at the University of Warwick.

In the paper “The stabilization potential of a standing molecule,” published today, 10 November 2021, in the journal Science Advances, an international team of researchers reveals the generic stabilization mechanism of a single standing molecule, which can be used in the rational design of three-dimensional molecular structures at surfaces.

Apple and Meta are on a collision course over wearables, AR/VR headsets and home devices. Also: Netflix and Apple mend fences around billing, Tim Cook talks cryptocurrency, and a new Apple Store is coming to Los Angeles. Finally, the App Store is dealt a loss in court.

For the past decade or so, Apple Inc.’s chief rival was considered to be Google. The two have gone toe-to-toe in smartphones, mobile operating systems, web services and home devices.

The next decade, however, could be defined by Apple’s rivalry with another Silicon Valley giant: Meta Platforms Inc.—the company known to everyone other than its own brand consultants as Facebook.

What kinds of ‘particles’ are allowed by nature? The answer lies in the theory of quantum mechanics, which describes the microscopic world.

In a bid to stretch the boundaries of our understanding of the world, UC Santa Barbara researchers have developed a device that could prove the existence of non-Abelian anyons, a type of particle that has been mathematically predicted to exist in two-dimensional space but so far not conclusively observed. The existence of these particles would pave the way toward major advances in topological quantum computing.

In a study that appears in the journal Nature, physicist Andrea Young, his graduate student Sasha Zibrov and their colleagues have taken a leap toward finding conclusive evidence for non-Abelian anyons. Using graphene, an atomically thin material derived from graphite (a form of carbon), they developed an extremely low-defect, highly tunable device in which non-Abelian anyons should be much more accessible. First, a little background: In our three-dimensional universe, elementary particles can be either fermions or bosons: think electrons (fermions) or the Higgs (a boson).

An innovator in early AR systems has a dire prediction: the metaverse could change the fabric of reality as we know it.

Louis Rosenberg, a computer scientist and developer of the first functional AR system at the Air Force Research Laboratory, penned an op-ed in Big Think this weekend warning that the metaverse — an immersive VR and AR world currently being developed by The Company Formerly Known as Facebook — could create what sounds like a real-life cyberpunk dystopia.

“I am concerned about the legitimate uses of AR by the powerful platform providers that will control the infrastructure,” Rosenberg wrote in the essay.

“If you want to measure something with very high precision, you almost always use an interferometer, because light makes for a very precise ruler,” says Jaime Cardenas, assistant professor of optics at the University of Rochester.

Now, the Cardenas Lab has created a way to make these optical workhorses even more useful and sensitive. Meiting Song, a Ph.D. student, has for the first time packaged an experimental way of amplifying interferometric signals—without a corresponding increase in extraneous, unwanted input, or “noise”—on a 1 mm by 1 mm integrated photonic chip. The breakthrough, described in Nature Communications, is based on a theory of weak value amplification with waveguides that was developed by Andrew Jordan, a professor of physics at Rochester, and students in his lab.

Now that crypto miners and their scalping ilk have succeeded in taking all of our precious GPU stock, it appears they’re setting their sights on one more thing gamers cherish: the AMD CPU supply. According to a report in the UK’s Bitcoin Press, part of the reason it’s so hard to find a current-gen AMD CPU for sale anywhere is a cryptocurrency named Raptoreum that is mined on CPUs rather than ASICs or GPUs. Apparently, its mining is sped up significantly by the large L3 cache embedded in CPUs such as AMD’s Ryzen, Epyc, and Threadripper.

Raptoreum was designed as an anti-ASIC currency: its developers wanted to keep the more expensive hardware solutions off their blockchain, believing they lowered profits for everyone. To accomplish this they chose the Ghostrider mining algorithm, a combination of the CryptoNight and x16r algorithms, and threw in some unique code to make it heavily randomized, thus its preference for L3 cache.

In case you weren’t aware, AMD’s high-end CPUs have more cache than their competitors from Intel, making them a hot item for miners of this specific currency. For example, a chip like the Threadripper 3990X has a chonky 256MB of L3 cache, but since that’s a $5,000 CPU, miners are settling for the still-beefy Ryzen chips. A CPU like the Ryzen 9 5900X has a generous 64MB of L3 cache compared to just 30MB on Intel’s Alder Lake CPUs, and just 16MB on Intel’s 11th-gen chips. Several models of AMD CPUs have this much cache too, not just the flagship silicon, including the previous-gen Ryzen 9 3900X CPU. The more affordable models, such as the 5800X, have just 32MB of L3 cache, however.
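The cache figures quoted above can be lined up in a few lines of Python to show why miners gravitate to AMD parts (numbers are as reported in the article; exact L3 sizes vary by SKU and generation):

```python
# L3 cache sizes (MB) as quoted in the article, not an exhaustive spec sheet.
l3_cache_mb = {
    "AMD Threadripper 3990X": 256,
    "AMD Ryzen 9 5900X": 64,
    "AMD Ryzen 7 5800X": 32,
    "Intel Alder Lake": 30,
    "Intel 11th-gen": 16,
}

# Sort from most to least L3 cache, the resource Ghostrider mining favors.
ranked = sorted(l3_cache_mb.items(), key=lambda kv: kv[1], reverse=True)
for chip, mb in ranked:
    print(f"{chip:24s} {mb:4d} MB")
```

Even the mid-range Ryzen 9 5900X carries more than twice the L3 of Intel’s newest chips in this list, which matches the article’s explanation for the CPU shortage.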

This instructs qsim to make use of its cuQuantum integration, which provides improved performance on NVIDIA GPUs. If you experience issues with this option, please file an issue on the qsim repository.
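As a rough sketch of what "instructing qsim to use cuQuantum" looks like from Python, the qsimcirq bindings expose GPU settings through an options object; in recent qsimcirq releases, `gpu_mode=1` is the value that requests the cuQuantum (cuStateVec) backend, while `gpu_mode=0` uses qsim's native CUDA kernels. The helper below is illustrative, not part of the tutorial itself:

```python
# Minimal sketch: building the GPU options passed to qsimcirq.QSimSimulator.
# Assumes qsimcirq built with GPU support; gpu_mode=1 selects cuQuantum.

def make_qsim_options(use_gpu: bool = True, use_cuquantum: bool = True) -> dict:
    """Return keyword arguments for qsimcirq.QSimOptions.

    gpu_mode=1 requests the cuQuantum (cuStateVec) backend;
    gpu_mode=0 falls back to qsim's native CUDA simulator.
    """
    return {
        "use_gpu": use_gpu,
        "gpu_mode": 1 if use_cuquantum else 0,
    }

# On a GPU-equipped VM one would then run (requires qsimcirq and CUDA):
#   import qsimcirq
#   opts = qsimcirq.QSimOptions(**make_qsim_options())
#   sim = qsimcirq.QSimSimulator(qsim_options=opts)
#   result = sim.simulate(circuit)
```

If the cuQuantum path fails on your hardware, switching `use_cuquantum` off is a quick way to check whether the issue is specific to that backend before filing a report.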

After you finish, don’t forget to stop or delete your VM on the Compute Instances dashboard to prevent further billing.

You are now ready to run your own large simulations on Google Cloud. For sample code of a large circuit, see the Simulate a large circuit tutorial.