
Recent theoretical breakthroughs have settled two long-standing questions about the viability of simulating quantum systems on future quantum computers, overcoming hurdles raised by earlier complexity analyses and opening the door to more advanced algorithms. Featured in two publications, the work by a quantum team at Los Alamos National Laboratory shows that physical properties of quantum systems allow for faster simulation techniques.

“Algorithms based on this work will be needed for the first full-scale demonstration of quantum simulations on quantum computers,” said Rolando Somma, a quantum theorist at Los Alamos and coauthor on the two papers.
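Neither excerpt spells out the algorithms themselves, but a common workhorse in quantum simulation is product-formula (Trotterized) time evolution. Below is a minimal, illustrative NumPy/SciPy sketch of first-order Trotterization for a toy two-spin Hamiltonian; the Hamiltonian, step count, and error check are assumptions chosen for illustration, not the methods from the Los Alamos papers.

```python
# Minimal classical sketch of first-order Trotterized time evolution for a
# tiny two-spin Hamiltonian H = X(x)X + Z(x)I. Illustrative only -- not the
# specific algorithms described in the Los Alamos publications.
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)

H1 = np.kron(X, X)          # interaction term
H2 = np.kron(Z, I)          # local field term
H = H1 + H2

t, n_steps = 1.0, 100       # total evolution time and number of Trotter steps
dt = t / n_steps

exact = expm(-1j * t * H)                          # exact time evolution
step = expm(-1j * dt * H1) @ expm(-1j * dt * H2)   # one Trotter step
trotter = np.linalg.matrix_power(step, n_steps)

# First-order Trotter error shrinks roughly as O(t^2 / n_steps).
print("operator-norm error:", np.linalg.norm(exact - trotter, 2))
```

Increasing the number of steps (or using higher-order product formulas) trades circuit depth for accuracy, which is exactly where tighter complexity bounds pay off.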

As the development of quantum computers accelerates, “use cases will grow exponentially. We’re at a turning point,” Uttley told Investor’s Business Daily.

Big Developers Of Quantum Computing

Quantum computing is on target to be one of the greatest scientific breakthroughs of the 21st Century. Businesses, governments, institutions and universities have made it a high priority, with billions of dollars invested globally.

Most physicists and philosophers now agree that time is emergent, and Digital Presentism holds that time emerges from complex qualia computing at the level of observer experiential reality. Time emerges from experiential data; it is an epiphenomenon of consciousness. From moment to moment, you are co-writing your own story, co-producing your own “participatory reality”: your stream of consciousness is not subject to some kind of deterministic “script.” You are entitled to degrees of freedom. If we are to create high-fidelity first-person simulated realities that may also be part of an intersubjectivity-based Metaverse, then the D-Theory of Time gives us a clear-cut guiding principle for doing just that.

Here’s the Consciousness: Evolution of the Mind (2021) documentary, Part III: CONSCIOUSNESS & TIME #consciousness #evolution #mind #time #DTheoryofTime #DigitalPresentism #CyberneticTheoryofMind


Watch the full documentary on Vimeo on demand: https://vimeo.com/ondemand/339083

*Based on the recent book The Syntellect Hypothesis: Five Paradigms of the Mind’s Evolution (2020) by evolutionary cyberneticist Alex M. Vikoulov, available as an eBook, paperback, hardcover, and audiobook on Amazon: https://www.amazon.com/Syntellect-Hypothesis-Paradigms-Minds-Evolution/dp/1733426140

To us humans, to be alive is to perceive the flow of time. Our perception of time is linear – we remember the past, we live in the present and we look forward to the future.

Some recent progress has been made in systemizing consciousness studies, but the temporal dimension of consciousness, notably the D-Theory of Time, might be at least as essential to our understanding of what we call human consciousness.

Our experience of time can be understood as a fractal dimension, not even a half dimension – we are subject to our species-specific algorithmic sense of time flow. What’s necessary for completion of quantum information processing, though, is a collapse of possibilities – “many worlds” collapsing into an observer’s temporal singularity, i.e., the present moment, which occurs approximately every 1/10 of a second. Between conscious moments lie incredibly vast and “eternally long” potentialities of something happening. But rest assured, you will experience a sequence of those “digital” moments, which gives you a sense of subjective reality.

Is time fundamental or emergent? How does time exist, if at all? How can we update the current epistemic status of temporal ontology? Digital Presentism: D-Theory of Time outlines a new theory of time: to understand our experiential reality and consciousness, we need to understand TIME.

Consciousness and time are intimately interwoven. Time is change (between static 3D frames), the 4th dimension. The flow of time is a rate of change, a computation, and conscious awareness is a stream of realized probabilistic outcomes.

A new study suggests that systems governed by quantum mechanics do not show an exclusively linear evolution in time: in this way, they are sometimes able to unfold into the past and into the future simultaneously.

An international group of physicists concludes, in recent research published in the journal Communications Physics, that quantum systems evolving in one direction or another in time can also be found evolving along both directions in unison. This property, shown by quantum systems in certain contexts, breaks with the classical conception of time, in which it is only possible to move forward or backward.

The work, carried out by scientists from the universities of Bristol (United Kingdom), Vienna (Austria), and the Balearic Islands (Spain) and from the Institute of Quantum Optics and Quantum Information (IQOQI-Vienna), shows that the boundary between time flowing forward and time flowing backward can become blurred in quantum mechanics. According to a press release from the University of Bristol, the new study forces us to rethink how the flow of time manifests itself in contexts in which quantum laws play a fundamental role.

To handle gaps in coverage, researchers have trained neural networks on regions where more complete weather data are available. Once trained, the system can be fed partial data and infer what the rest is likely to be. For example, the trained system can produce a likely weather-radar map from inputs such as satellite cloud images and data on lightning strikes.

This is exactly the sort of thing that neural networks do well with: recognizing patterns and inferring correlations.
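As a rough sketch of that pattern (train where coverage is complete, then infer the missing field from partial inputs), here is a minimal PyTorch example. The network shape, the choice of input channels (cloud imagery and lightning density), and the random toy data are assumptions made for illustration, not the actual system described above.

```python
# Minimal PyTorch sketch of the "infer missing weather fields" idea: learn a
# mapping from available inputs (e.g. satellite cloud imagery, lightning
# density) to a radar-like field. Shapes and toy data are illustrative only.
import torch
import torch.nn as nn

class RadarInfill(nn.Module):
    def __init__(self, in_channels: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # predicted radar map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = RadarInfill()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy batch: 8 tiles, 2 input channels (cloud, lightning), 64x64 pixels,
# with "ground truth" radar taken from regions where coverage is complete.
inputs = torch.rand(8, 2, 64, 64)
target_radar = torch.rand(8, 1, 64, 64)

for _ in range(5):                       # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), target_radar)
    loss.backward()
    optimizer.step()

print("final training loss:", float(loss))
```

Once trained on real paired data, the same forward pass would be applied to regions where only the satellite and lightning channels exist, yielding the inferred radar map.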

What drew the Rigetti team’s attention is the fact that neural networks also map well onto quantum processors. In a typical neural network, a layer of “neurons” performs operations before forwarding its results to the next layer. The network “learns” by altering the strength of the connections among units in different layers. On a quantum processor, each qubit can perform the equivalent of an operation. The qubits also share connections among themselves, and the strength of the connection can be adjusted. So, it’s possible to implement and train a neural network on a quantum processor.
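To make the analogy concrete, here is a hedged NumPy sketch of a tiny parameterized (“variational”) two-qubit circuit whose rotation angles play the role of trainable weights, simulated classically and trained by finite-difference gradient descent. It is illustrative only: the circuit layout, cost function, and training loop are assumptions, not Rigetti’s hardware stack or their actual algorithm.

```python
# Hedged sketch of the "neural network on a quantum processor" analogy: a
# parameterized 2-qubit circuit whose rotation angles act as trainable
# weights, simulated classically and trained with finite-difference gradients.
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def circuit_state(params):
    """Layers of single-qubit RY 'neurons' joined by a CNOT 'connection'."""
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                                    # start in |00>
    for theta0, theta1 in params.reshape(-1, 2):
        layer = np.kron(ry(theta0), ry(theta1))       # parallel rotations
        state = CNOT @ (layer @ state)                # entangling connection
    return state

def cost(params):
    """Expectation of Z on qubit 0; training drives it toward -1."""
    Z0 = np.diag([1, 1, -1, -1]).astype(complex)
    state = circuit_state(params)
    return float(np.real(state.conj() @ (Z0 @ state)))

params = np.random.default_rng(0).uniform(0, np.pi, size=4)  # 2 layers x 2 qubits
lr, eps = 0.2, 1e-4
for _ in range(100):                                  # simple gradient descent
    grad = np.zeros_like(params)
    for i in range(len(params)):
        shift = np.zeros_like(params)
        shift[i] = eps
        grad[i] = (cost(params + shift) - cost(params - shift)) / (2 * eps)
    params -= lr * grad

print("trained cost (target -1):", cost(params))
```

On real hardware the expectation value would be estimated from repeated measurements rather than read off a statevector, and gradients are typically obtained with parameter-shift rules rather than finite differences; the overall train-the-angles loop, however, is the point of the analogy.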

Graphene consists of a planar structure, with carbon atoms connected in a hexagonal pattern that resembles a honeycomb. When graphene is reduced to several nanometers (nm) in size, it becomes a graphene quantum dot that exhibits fluorescent and semiconductor properties. Graphene quantum dots can be used in various applications as a novel material, including display screens, solar cells, secondary batteries, bioimaging, lighting, photocatalysis, and sensors. Interest in graphene quantum dots is growing because recent research has demonstrated that controlling the proportion of heteroatoms (such as nitrogen, sulfur, and phosphorus) within the carbon structures of certain materials enhances their optical, electrical, and catalytic properties.

The Korea Institute of Science and Technology (KIST, President Seok-Jin Yoon) reported that the research team led by Dr. Byung-Joon Moon and Dr. Sukang Bae of the Functional Composite Materials Research Center has developed a technique to precisely control the bonding structure of single heteroatoms in graphene quantum dots, a zero-dimensional carbon nanomaterial, through simple control of the chemical reaction, and that the team has identified the relevant reaction mechanisms.

With the aim of controlling heteroatom incorporation within the graphene quantum dot, researchers have previously investigated using additives that introduce the heteroatom into the dot after the dot itself has already been synthesized. The dot then had to be purified further, so this method added several steps to the overall fabrication process. Another method that was studied involved the simultaneous use of multiple organic precursors (which are the main ingredients for dot synthesis), along with the additives that contain the heteroatom. However, these methods had significant disadvantages, including reduced crystallinity in the final product and lower overall reaction yield, since several additional purification steps had to be implemented. Furthermore, in order to obtain quantum dots with the chemical compositions desired by manufacturers, various reaction conditions, such as the proportion of additives, would have to be optimized.

When it comes to quarks, those of the third generation (the top and bottom) are certainly the most fascinating and intriguing. Metaphorically, we would classify their social life as quite secluded, as they do not mix much with their relatives of the first and second generation. However, as the proper aristocrats of the particle physics world, they enjoy privileged and intense interactions with the Higgs field; it is the intensity of this interaction that eventually determines things like the quantum stability of our Universe. Their social life may also have a dark side, as they could be involved in interactions with dark matter.

This special status of third-generation quarks makes them key players in the search for phenomena not foreseen by the Standard Model. A new result released by the ATLAS Collaboration focuses on models of new phenomena that predict an enhanced yield of collision events with bottom quarks and invisible particles. A second new ATLAS search considers the possible presence of added tau leptons. Together, these results set strong constraints on the production of partners of the b-quarks and of possible dark-matter particles.