
Interesting insight on Aluminum Nitride used to create Qubits.

http:///articles/could-aluminum-nitride-be-engineered-to-produce-quantum-bits


Newswise — Quantum computers have the potential to break common cryptography techniques, search huge datasets and simulate quantum systems in a fraction of the time it would take today’s computers. But before this can happen, engineers need to be able to harness the properties of quantum bits or qubits.

Currently, one of the leading methods for creating qubits in materials involves exploiting the structural atomic defects in diamond. But several researchers at the University of Chicago and Argonne National Laboratory believe that if an analogue defect could be engineered into a less expensive material, the cost of manufacturing quantum technologies could be significantly reduced. Using supercomputers at the National Energy Research Scientific Computing Center (NERSC), which is located at the Lawrence Berkeley National Laboratory (Berkeley Lab), these researchers have identified a possible candidate in aluminum nitride. Their findings were published in Nature Scientific Reports.

“Silicon semiconductors are reaching their physical limits—it’ll probably happen within the next five to 10 years—but if we can implement qubits into semiconductors, we will be able to move beyond silicon,” says Hosung Seo, a postdoctoral researcher at the University of Chicago and a first author of the paper.

Read more

I do love Nvidia!


During the past nine months, an Nvidia engineering team built a self-driving car with one camera, one Drive-PX embedded computer and only 72 hours of training data. Nvidia published an academic preprint of the results of the DAVE2 project, titled “End to End Learning for Self-Driving Cars,” on arXiv.org, which is hosted by the Cornell University Library.

The Nvidia project called DAVE2 is named after a 10-year-old Defense Advanced Research Projects Agency (DARPA) project known as DARPA Autonomous Vehicle (DAVE). Although neural networks and autonomous vehicles seem like a just-invented-now technology, researchers such as Google’s Geoffrey Hinton, Facebook’s Yann LeCun and the University of Montreal’s Yoshua Bengio have collaboratively researched this branch of artificial intelligence for more than two decades. And the DARPA DAVE project’s application of neural network-based autonomous vehicles was preceded by the ALVINN project developed at Carnegie Mellon in 1989. What has changed is that GPUs have made building on their research economically feasible.

Neural networks and image recognition applications such as self-driving cars have exploded recently for two reasons. First, the graphics processing units (GPUs) used to render graphics in mobile phones became powerful and inexpensive. GPUs densely packed onto board-level supercomputers are very good at solving massively parallel neural network problems and are inexpensive enough for every AI researcher and software developer to buy. Second, large, labeled image datasets have become available to train massively parallel neural networks implemented on GPUs to see and perceive the world of objects captured by cameras.
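The preprint describes a convolutional network that maps raw camera frames directly to a steering command, with the steering error backpropagated all the way to the pixels. As a rough illustration (not Nvidia’s actual code), here is a minimal PilotNet-style model in PyTorch; the convolutional layer sizes follow the architecture described in the paper, while the input size (3x66x200), the ReLU activations, and the training step are assumptions rather than details taken from the paper.

```python
# A minimal PilotNet-style network: raw camera frames in, a single steering value out.
# Illustrative sketch only; layer sizes follow the DAVE2/PilotNet preprint, the rest is assumed.
import torch
import torch.nn as nn

class PilotNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 24, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 36, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(36, 48, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, kernel_size=3), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3), nn.ReLU(),
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 1 * 18, 100), nn.ReLU(),   # 64x1x18 feature map for a 3x66x200 input
            nn.Linear(100, 50), nn.ReLU(),
            nn.Linear(50, 10), nn.ReLU(),
            nn.Linear(10, 1),                          # predicted steering value
        )

    def forward(self, frames):
        return self.regressor(self.features(frames))

if __name__ == "__main__":
    model = PilotNetSketch()
    batch = torch.randn(8, 3, 66, 200)                # 8 dummy camera frames
    steering = model(batch)                           # shape: (8, 1)
    loss = nn.functional.mse_loss(steering, torch.zeros_like(steering))
    loss.backward()                                   # gradients flow from steering error back to pixels
    print(steering.shape)
```

The whole point of the end-to-end approach is visible in the last few lines: a single scalar loss on the steering output drives learning in every convolutional layer, with no hand-coded lane detection in between.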

Read more

Post-quantum cryptography discussion in Tacoma, WA on May 5th, covering attacks by quantum-computer-equipped hackers and the cryptographic algorithms that could offset those attacks; it may be of interest to sit in and even join the debates. I will try to attend if I can, because it would be interesting to see the arguments raised and the responses to them.


The University of Washington Tacoma Institute of Technology will present a discussion about the esoteric field of post-quantum cryptography at the Northwest Cybersecurity Symposium on May 5.

“I’ve been researching post-quantum cryptography for years, finding ways to protect against a threat that doesn’t yet exist,” said Anderson Nascimento, assistant professor of computer science at the institute, in a release.

Post-quantum cryptography refers to encryption that would be secure against an attack by a quantum computer — a kind of supercomputer using quantum mechanics, which, so far, exists only in theory.
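To make that concrete, one family of post-quantum schemes relies only on hash functions, which are not known to fall to quantum algorithms the way factoring and discrete logarithms do. Below is a toy Lamport one-time signature in Python; it is purely illustrative (each key pair can sign only one message) and is not drawn from the symposium material.

```python
# Toy Lamport one-time signature: security rests only on a hash function (SHA-256 here),
# which is why hash-based signatures are a post-quantum candidate. Illustrative only --
# real schemes (e.g. XMSS, SPHINCS+) add Merkle trees so a key can sign many messages.
import hashlib
import secrets

def keygen():
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(hashlib.sha256(a).digest(), hashlib.sha256(b).digest()) for a, b in sk]
    return sk, pk

def bits(msg: bytes):
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one secret per message-digest bit.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg: bytes, sig):
    return all(hashlib.sha256(s).digest() == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, bits(msg))))

sk, pk = keygen()
sig = sign(sk, b"post-quantum hello")
print(verify(pk, b"post-quantum hello", sig))   # True
print(verify(pk, b"tampered message", sig))     # False
```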

Read more

RPI’s work on a new material could take semiconducting transistors to new levels.


Two-dimensional phosphorus, a material known as phosphorene, has potential application as a material for semiconducting transistors in ever faster and more powerful computers. But there’s a hitch. Many of the useful properties of this material, like its ability to conduct electrons, are anisotropic, meaning they vary depending on the orientation of the crystal. Now, a team including researchers at Rensselaer Polytechnic Institute (RPI) has developed a new method to quickly and accurately determine that orientation using the interactions between light and electrons within phosphorene and other atoms-thick crystals of black phosphorus.

Phosphorene—a single layer of phosphorus atoms—was isolated for the first time in 2014, allowing physicists to begin exploring its properties experimentally and theoretically. Vincent Meunier, head of the Rensselaer Department of Physics, Applied Physics, and Astronomy and a leader of the team that developed the new method, published his first paper on the material—confirming the structure of phosphorene—in that same year.
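In practice, “anisotropic” means a quantity such as the in-plane conductivity takes different values along the two crystal axes (armchair and zigzag), so any measurement depends on the angle between the probe and the lattice. The short sketch below only illustrates that angular dependence with placeholder numbers; it is not the RPI team’s method, which infers the orientation from light-electron interactions.

```python
# Illustration of in-plane anisotropy: a direction-dependent conductivity
#   sigma(theta) = sigma_x * cos^2(theta) + sigma_y * sin^2(theta)
# where theta is the angle from the armchair axis. The two values below are
# placeholders chosen only to show the effect, not measured phosphorene data.
import math

SIGMA_ARMCHAIR = 1.0   # hypothetical conductivity along the armchair direction (arbitrary units)
SIGMA_ZIGZAG = 0.4     # hypothetical conductivity along the zigzag direction

def conductivity(theta_deg: float) -> float:
    t = math.radians(theta_deg)
    return SIGMA_ARMCHAIR * math.cos(t) ** 2 + SIGMA_ZIGZAG * math.sin(t) ** 2

for angle in (0, 30, 45, 60, 90):
    print(f"{angle:3d} deg: sigma = {conductivity(angle):.2f}")
```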

“This is a really interesting material because, depending on which direction you do things, you have completely different properties,” said Meunier, a member of the Rensselaer Center for Materials, Devices, and Integrated Systems (cMDIS). “But because it’s such a new material, it’s essential that we begin to understand and predict its intrinsic properties.”

Meunier and researchers at Rensselaer contributed to the theoretical modeling and prediction of the properties of phosphorene, drawing on the Rensselaer supercomputer, the Center for Computational Innovations (CCI), to perform calculations. Through the Rensselaer cMDIS, Meunier and his team are able to develop the potential of new materials such as phosphorene to serve in future generations of computers and other devices. Meunier’s research exemplifies the work being done at The New Polytechnic, addressing difficult and complex global challenges, the need for interdisciplinary and true collaboration, and the use of the latest tools and technologies, many of which are developed at Rensselaer.

Read more

Supercomputers facing problems?


In the world of High Performance Computing (HPC), supercomputers represent the peak of capability, with performance measured in petaFLOPS (10^15 operations per second). They play a key role in climate research, drug research, oil and gas exploration, cryptanalysis, and nuclear weapons development. But after decades of steady improvement, changes are coming as old technologies start to run into fundamental problems.

When you’re talking about supercomputers, a good place to start is the TOP500 list. Published twice a year, it ranks the world’s fastest machines based on their performance on the Linpack benchmark, which solves a dense system of linear equations using double precision (64 bit) arithmetic.
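Since Linpack just times the solution of a dense double-precision system A·x = b, a back-of-the-envelope version of the idea fits in a few lines: count roughly (2/3)·n^3 + 2·n^2 floating-point operations for the solve and divide by the wall-clock time. This is only a sketch of what the benchmark measures, not the official HPL code, and a laptop run will report gigaFLOPS rather than petaFLOPS.

```python
# Rough sketch of what Linpack measures: solve a dense double-precision system A x = b,
# count ~ (2/3) n^3 + 2 n^2 floating-point operations, and divide by wall-clock time.
# This is not the official HPL benchmark -- just the idea behind the number.
import time
import numpy as np

n = 2000                                   # matrix size; real HPL runs use far larger n
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                  # LU factorization + triangular solves, float64
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2    # standard Linpack operation count
print(f"{flops / elapsed / 1e9:.1f} GFLOPS in {elapsed:.2f} s "
      f"(residual {np.linalg.norm(A @ x - b):.2e})")
```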

Looking down the list, you soon run into some numbers that boggle the mind. The Tianhe-2 (Milky Way-2), a system deployed at the National Supercomputer Center in Guangzhou, China, is the number one system as of November 2015, a position it’s held since 2013. Running Linpack, it clocks in at 33.86 x 10^15 floating point operations per second (33.86 PFLOPS).

Read more

Now, I have been hearing that folks are planning to experiment with blockchains on the new Nvidia DGX-1. I do know Nvidia’s CEO mentioned that the DGX-1 could be used in conjunction with blockchain technology as an interim step toward quantum computing to help secure information. We’ll see.
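For what it’s worth, the “block chaining” idea is easier to see in code than to describe: each block stores the hash of the one before it, so tampering with any record breaks every later link. The sketch below is a generic toy, with nothing specific to the DGX-1 or to anything Nvidia has announced.

```python
# Minimal hash chain: each block stores the hash of the previous block, so altering any
# earlier record invalidates every block after it. Generic illustration only.
import hashlib
import json

def make_block(prev_hash: str, payload: dict) -> dict:
    body = {"prev_hash": prev_hash, "payload": payload}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def chain_is_valid(chain: list) -> bool:
    for i, block in enumerate(chain):
        body = {"prev_hash": block["prev_hash"], "payload": block["payload"]}
        if block["hash"] != hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("0" * 64, {"record": "genesis"})]
chain.append(make_block(chain[-1]["hash"], {"record": "model checkpoint v1"}))
print(chain_is_valid(chain))              # True
chain[0]["payload"]["record"] = "forged"
print(chain_is_valid(chain))              # False -- the tampering is detected
```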


Sterling Heights, MI (PRWEB) April 24, 2016.

Rave Computer, an Elite Solution Provider in the NVIDIA Partner Network program, today announced that it has been selected to offer the new NVIDIA® DGX-1™ deep learning system, the world’s first deep learning supercomputer designed to meet the unlimited computing demands of artificial intelligence.

Rave will support NVIDIA’s marketing and sales efforts by qualifying, educating, and managing potential customers’ requirements and orders. Rave will also actively be involved in driving market awareness and demand development.

Read more

Ralph Merkle, Robert Freitas and others have a theoretical design for a molecular mechanical computer that would be 100 billion times more energy efficient than the most energy-efficient conventional green supercomputer. Removing the need for gears, clutches, switches, and springs makes the design easier to build.
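A quick back-of-the-envelope check puts that efficiency claim in context. Assuming roughly 7 GFLOPS per watt for a leading green supercomputer of that era (an assumed baseline, not a figure from the authors), a 100-billion-fold improvement lands near the Landauer limit, which makes sense only for a design that computes reversibly rather than erasing bits on every operation.

```python
# Back-of-the-envelope check of the efficiency claim. The 7 GFLOPS/W figure for a
# leading "green" supercomputer circa 2015-2016 is an assumption, not from the article.
from math import log

K_BOLTZMANN = 1.380649e-23        # J/K
T_ROOM = 300.0                    # K

green_flops_per_watt = 7e9                      # assumed conventional baseline
joules_per_op_conventional = 1.0 / green_flops_per_watt
joules_per_op_mechanical = joules_per_op_conventional / 1e11   # "100 billion times" better

landauer_limit = K_BOLTZMANN * T_ROOM * log(2)  # minimum energy to erase one bit

print(f"conventional:              {joules_per_op_conventional:.2e} J/op")
print(f"claimed mechanical design: {joules_per_op_mechanical:.2e} J/op")
print(f"Landauer limit at 300 K:   {landauer_limit:.2e} J/bit erased")
# ~1.4e-21 J/op vs ~2.9e-21 J: right around the Landauer limit, which is consistent
# with a design intended to compute reversibly instead of erasing bits each operation.
```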

Existing designs for mechanical computing can be vastly improved upon in terms of the number of parts required to implement a complete computational system. Only two types of parts are required: Links, and rotary joints. Links are simply stiff, beam-like structures. Rotary joints are joints that allow rotational movement in a single plane.

Simple logic and conditional routing can be accomplished using only links and rotary joints, which are solidly connected at all times. No gears, clutches, switches, springs, or any other mechanisms are required. An actual system does not require linear slides.

Read more

Conditions in the vast universe can be quite extreme: Violent collisions scar the surfaces of planets. Nuclear reactions in bright stars generate tremendous amounts of energy. Gigantic explosions catapult matter far out into space. But how exactly do processes like these unfold? What do they tell us about the universe? And could their power be harnessed for the benefit of humankind?

To find out, researchers from the Department of Energy’s SLAC National Accelerator Laboratory perform sophisticated experiments and computer simulations that recreate violent cosmic conditions on a small scale in the lab.

“The field of high energy density science is growing very rapidly, fueled by a number of technological breakthroughs,” says Siegfried Glenzer, head of SLAC’s High Energy Density Science Division. “We now have high-power lasers to create extreme states of matter, cutting-edge X-ray sources to analyze these states at the atomic level, and high-performance supercomputers to run complex simulations that guide and help explain our experiments. With its outstanding capabilities in these areas, SLAC is a particularly fertile ground for this type of research.”

Read more