
Researchers at Lawrence Berkeley National Laboratory’s Advanced Quantum Testbed (AQT) demonstrated that an experimental method known as randomized compiling (RC) can dramatically reduce error rates in quantum algorithms and lead to more accurate and stable quantum computations. No longer just a theoretical concept for quantum computing, RC has now been validated experimentally; the multidisciplinary team’s results are published in Physical Review X.

The experiments at AQT were performed on a four-qubit superconducting quantum processor. The researchers demonstrated that RC can suppress one of the most severe types of errors in quantum computers: coherent errors.
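Randomized compiling works by dressing each gate with random Pauli operators chosen so that the ideal circuit is unchanged; averaging results over many such randomizations tailors coherent errors into stochastic noise. Here is a toy single-qubit sketch of that dressing step. The Hadamard gate and the uniform Pauli sampling are illustrative assumptions, not AQT's actual protocol:

```python
import numpy as np

# Single-qubit Pauli operators
PAULIS = [np.eye(2),                          # I
          np.array([[0, 1], [1, 0]]),         # X
          np.array([[0, -1j], [1j, 0]]),      # Y
          np.diag([1.0, -1.0])]               # Z

def twirl(gate, rng):
    """Dress `gate` with a random Pauli P before it and the compensating
    correction C = G P G^dagger after it, so that C @ G @ P == G exactly."""
    P = PAULIS[rng.integers(len(PAULIS))]
    C = gate @ P @ gate.conj().T
    return C @ gate @ P

rng = np.random.default_rng(1)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard as the example gate

# Every randomization implements the same ideal gate ...
for _ in range(20):
    assert np.allclose(twirl(H, rng), H)
# ... but a coherent error acting alongside the physical gate is conjugated
# by a different Pauli on each run, so it averages into stochastic noise.
```

For Clifford gates the correction `C` is itself a Pauli (up to phase), which is why this randomization adds no circuit depth.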

Akel Hashim, an AQT researcher involved in the experimental breakthrough and a graduate student at the University of California, Berkeley, explained: “We can perform quantum computations in this era of noisy intermediate-scale quantum (NISQ) computing, but these are very noisy, prone to errors from many different sources, and don’t last very long due to the decoherence—that is, information loss—of our qubits.”

By Stina Andersson and Ellinor Wanzambi

Researchers have been working on quantum algorithms since physicists first proposed using principles of quantum physics to simulate nature decades ago. One important component in many quantum algorithms is the quantum walk, the quantum equivalent of a classical Markov chain, i.e., a random walk without memory. Quantum walks are used in algorithms in areas such as searching, node ranking in networks, and element distinctness.

Consider the graph in Figure 1 and imagine that we want to move randomly between nodes A, B, C, and D. We can only move between nodes that are connected by an edge, and each edge has an associated probability that determines how likely we are to move to the connected node. This is a random walk. In this article, we work only with Markov chains, also called memoryless random walks, meaning that the probabilities are independent of the previous steps. For example, after arriving at node A, the probabilities for the next step are the same whether we came from node B or node D.
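A memoryless walk like this is straightforward to sketch in code. The graph and edge probabilities below are invented for illustration (Figure 1's actual probabilities are not reproduced here):

```python
import random

# Hypothetical transition probabilities for a walk on nodes A-D;
# each entry lists (neighbour, probability of moving there).
P = {
    "A": [("B", 0.5), ("D", 0.5)],
    "B": [("A", 0.25), ("C", 0.75)],
    "C": [("B", 0.5), ("D", 0.5)],
    "D": [("A", 0.75), ("C", 0.25)],
}

def walk(start, steps, seed=0):
    """Markov chain: each move depends only on the current node."""
    rng = random.Random(seed)
    path = [start]
    for _ in range(steps):
        neighbours, weights = zip(*P[path[-1]])
        path.append(rng.choices(neighbours, weights=weights)[0])
    return path

print(walk("A", 5))
```

A quantum walk replaces these classical probabilities with complex amplitudes, so paths can interfere rather than simply accumulate.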

Quantum computers have the potential to solve important problems that are beyond reach even for the most powerful supercomputers, but they require an entirely new way of programming and creating algorithms.

Universities and major tech companies are spearheading research on how to develop these new algorithms. In a recent collaboration between the University of Helsinki, Aalto University, the University of Turku, and IBM Research Europe-Zurich, a team of researchers has developed a new method to speed up calculations on quantum computers. The results are published in PRX Quantum, a journal of the American Physical Society.

“Unlike classical computers, which use bits to store ones and zeros, information is stored in the qubits of a quantum processor in the form of a quantum state, or a wavefunction,” says postdoctoral researcher Guillermo García-Pérez from the Department of Physics at the University of Helsinki, first author of the paper.
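As a concrete illustration (a generic textbook example, not the paper's method): a single qubit's wavefunction is a normalized two-component complex vector, and measurement probabilities come from the squared amplitudes via the Born rule.

```python
import numpy as np

# |psi> = alpha|0> + beta|1>: an equal superposition with a relative phase
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
psi = np.array([alpha, beta])

assert np.isclose(np.linalg.norm(psi), 1.0)   # wavefunctions are normalized

# Born rule: probability of each measurement outcome
probs = np.abs(psi) ** 2
print(probs)   # both outcomes equally likely here
```

Note that the relative phase between the amplitudes is invisible in these probabilities, which is part of what makes reading information out of a quantum state subtle.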

Recent theoretical breakthroughs have settled two long-standing questions about the viability of simulating quantum systems on future quantum computers, overcoming challenges from complexity analyses to enable more advanced algorithms. Featured in two publications, the work by a quantum team at Los Alamos National Laboratory shows that physical properties of quantum systems allow for faster simulation techniques.

“Algorithms based on this work will be needed for the first full-scale demonstration of quantum simulations on quantum computers,” said Rolando Somma, a quantum theorist at Los Alamos and coauthor on the two papers.
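For context, one standard family of quantum simulation techniques that such complexity analyses address is the product formula (Trotterization), which approximates evolution under a Hamiltonian H = A + B by alternating many short evolutions under A and B. A small numerical sketch of the first-order formula, generic rather than specific to these papers:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])

def evolve(H, t):
    """Exact e^{-iHt} for a Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T

def trotter(A, B, t, n):
    """First-order product formula: (e^{-iAt/n} e^{-iBt/n})^n."""
    step = evolve(A, t / n) @ evolve(B, t / n)
    return np.linalg.matrix_power(step, n)

t = 1.0
exact = evolve(X + Z, t)
err = lambda n: np.linalg.norm(exact - trotter(X, Z, t, n), 2)
# The approximation error shrinks roughly as 1/n for the first-order formula
print(err(10), err(100))
```

Tighter error analyses of exactly this kind of formula, exploiting the physical structure of the Hamiltonian, are what enable the faster simulation techniques described above.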

Most physicists and philosophers now agree that time is emergent, while Digital Presentism holds that time emerges from complex qualia computing at the level of observer experiential reality. Time emerges from experiential data; it’s an epiphenomenon of consciousness. From moment to moment, you are co-writing your own story, co-producing your own “participatory reality” — your stream of consciousness is not subject to some kind of deterministic “script.” You are entitled to degrees of freedom. If we are to create high-fidelity first-person simulated realities that may also be part of an intersubjectivity-based Metaverse, then the D-Theory of Time gives us a clear-cut guiding principle for doing just that.

Here’s the Consciousness: Evolution of the Mind (2021) documentary, Part III: CONSCIOUSNESS & TIME.


Watch the full documentary on Vimeo on demand: https://vimeo.com/ondemand/339083

*Based on recent book The Syntellect Hypothesis: Five Paradigms of the Mind’s Evolution (2020) by evolutionary cyberneticist Alex M. Vikoulov, available as eBook, paperback, hardcover, and audiobook on Amazon: https://www.amazon.com/Syntellect-Hypothesis-Paradigms-Minds-Evolution/dp/1733426140

To us humans, to be alive is to perceive the flow of time. Our perception of time is linear – we remember the past, we live in the present and we look forward to the future.

Some recent progress has been made in systematizing consciousness studies, but the temporal dimension of consciousness, notably the D-Theory of Time, might be at least as essential to our understanding of what we call human consciousness.

Our experience of time can be understood as a fractal dimension, not even a half dimension – we are subject to our species-specific algorithmic sense of time flow. What’s necessary for the completion of quantum information processing, though, is a collapse of possibilities – “many worlds” collapsing into an observer’s temporal singularity, i.e., the present moment, which happens approximately every 1/10 of a second. Between conscious moments lie incredibly vast and “eternally long” potentialities of something happening. But rest assured, you will experience a sequence of those “digital” moments, which gives you a sense of subjective reality.

Is time fundamental or emergent? How does time exist, if at all? How can we update the current epistemic status of temporal ontology? Digital Presentism: D-Theory of Time outlines a new theory of time, because to understand our experiential reality and consciousness, we need to understand TIME.

Consciousness and time are intimately interwoven. Time is change (between static 3D frames), the 4th dimension. The flow of time is a rate of change, a computation, and conscious awareness is a stream of realized probabilistic outcomes.

Summary: A new machine-learning algorithm could help practitioners identify autism in children more effectively.

Source: USC

For children with autism spectrum disorder (ASD), receiving an early diagnosis can make a huge difference in improving behavior, skills and language development. But despite being one of the most common developmental disabilities, impacting 1 in 54 children in the U.S., it’s not that easy to diagnose.



Games have a long history of serving as a benchmark for progress in artificial intelligence. Recently, approaches using search and learning have shown strong performance across a set of perfect information games, and approaches using game-theoretic reasoning and learning have shown strong performance for specific imperfect information poker variants. We introduce Player of Games, a general-purpose algorithm that unifies previous approaches, combining guided search, self-play learning, and game-theoretic reasoning. Player of Games is the first algorithm to achieve strong empirical performance in large perfect and imperfect information games, an important step towards truly general algorithms for arbitrary environments. We prove that Player of Games is sound, converging to perfect play as available computation time and approximation capacity increase.
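The game-theoretic reasoning in such algorithms builds on no-regret learning, as in counterfactual regret minimization. A minimal regret-matching example for matching pennies — a standalone textbook sketch, vastly simpler than Player of Games itself — shows the key property: the players' average strategies converge to the Nash equilibrium (here, uniform random play).

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # matching-pennies payoff for player 0

def strategy(regret):
    """Regret matching: play actions in proportion to positive regret."""
    pos = np.maximum(regret, 0)
    return pos / pos.sum() if pos.sum() > 0 else np.full(2, 0.5)

def regret_matching(iters=50000):
    regrets = [np.array([1.0, 0.0]), np.zeros(2)]  # slightly asymmetric start
    sums = [np.zeros(2), np.zeros(2)]
    for _ in range(iters):
        s0, s1 = strategy(regrets[0]), strategy(regrets[1])
        sums[0] += s0
        sums[1] += s1
        u0 = A @ s1             # expected payoff of each action for player 0
        u1 = -(s0 @ A)          # and for player 1 (zero-sum)
        regrets[0] += u0 - s0 @ u0
        regrets[1] += u1 - s1 @ u1
    return sums[0] / iters, sums[1] / iters

avg0, avg1 = regret_matching()
print(avg0, avg1)   # both approach [0.5, 0.5]
```

The current strategies cycle forever; it is the time-averaged strategies that converge, which is why CFR-style algorithms track and return averages.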

The idiom “actions speak louder than words” first appeared in print almost 300 years ago. A new study echoes this view, arguing that combining self-supervised and offline reinforcement learning (RL) could lead to a new class of algorithms that understand the world through actions and enable scalable representation learning.

Machine learning (ML) systems have achieved outstanding performance in domains ranging from computer vision to speech recognition and natural language processing, yet still struggle to match the flexibility and generality of human reasoning. This has led ML researchers to search for the “missing ingredient” that might boost these systems’ ability to understand, reason and generalize.

In the paper Understanding the World Through Action, Sergey Levine, an assistant professor in UC Berkeley’s Department of Electrical Engineering and Computer Sciences, suggests that a general, principled, and powerful framework for utilizing unlabelled data could be derived from RL, enabling ML systems that leverage large datasets to better understand the real world.
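As a toy illustration of learning from a fixed dataset of logged transitions — generic batch Q-iteration on a five-state chain, not Levine's proposed framework — the agent below learns to reach the rewarded right end without ever interacting with the environment during training:

```python
import numpy as np

N_STATES, GAMMA = 5, 0.9

def step(s, a):
    """Deterministic chain: action 1 moves right, action 0 moves left."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    reward = 1.0 if s2 == N_STATES - 1 else 0.0
    done = s2 == N_STATES - 1
    return s2, reward, done

# Offline dataset: every (state, action) transition, logged once.
dataset = [(s, a, *step(s, a)) for s in range(N_STATES - 1) for a in (0, 1)]

# Batch Q-iteration: sweep the fixed dataset until values converge.
Q = np.zeros((N_STATES, 2))
for _ in range(100):
    for s, a, s2, r, done in dataset:
        Q[s, a] = r + (0.0 if done else GAMMA * Q[s2].max())

policy = Q[:-1].argmax(axis=1)
print(policy)   # -> [1 1 1 1]: move right in every non-terminal state
```

Real offline RL must additionally cope with datasets that do not cover every state-action pair, which is where the open research questions lie.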

To set some benchmarks for their simulator, the researchers tried out three different design algorithms working in conjunction with a deep reinforcement learning algorithm that learned to control the robots through many rounds of trial and error.

The co-designed bots performed well on the simpler tasks, like walking or carrying things, but struggled with tougher challenges, like catching and lifting, suggesting there’s plenty of scope for advances in co-design algorithms. Nonetheless, the AI-designed bots outperformed ones designed by humans on almost every task.

Intriguingly, many of the co-designed bots took on shapes similar to real animals. One evolved to resemble a galloping horse, while another, set the task of climbing up a chimney, evolved arms and legs and clambered up somewhat like a monkey.
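The two-level search described above — an outer loop proposing body designs, an inner loop that learns a controller for each candidate — can be sketched abstractly. Everything below is invented for illustration: the real system uses physics simulation and deep RL, not this toy scalar objective and random search.

```python
import random

def task_performance(design, controller):
    # Stand-in for a physics rollout: reward is highest when the
    # controller is matched to the body design (both scalars here).
    return -abs(design - controller)

def train_controller(design, rng, trials=50):
    # Inner loop: crude stand-in for deep RL -- random search over controllers.
    return max(task_performance(design, rng.uniform(-1, 1))
               for _ in range(trials))

def co_design(generations=20, seed=0):
    # Outer loop: mutate the body design, keep whichever candidate its
    # freshly trained controller scores best with.
    rng = random.Random(seed)
    best_design = rng.uniform(-1, 1)
    best_score = train_controller(best_design, rng)
    for _ in range(generations):
        candidate = min(1, max(-1, best_design + rng.gauss(0, 0.2)))
        score = train_controller(candidate, rng)
        if score > best_score:
            best_design, best_score = candidate, score
    return best_score

print(co_design())   # approaches 0.0, the best achievable reward
```

The expensive part in practice is the inner loop, since every candidate morphology needs its own round of controller training before it can be fairly evaluated.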