
The Big Bang might never have happened at all, as a growing number of cosmologists question the standard account of the origin of the Universe. The Big Bang is a point in time defined by mathematical extrapolation: the theory tells us only that something must have changed around 13.7 billion years ago. There is no single "point" where the Big Bang occurred; according to the Eternal Inflation model, it was always an extended volume of space. In light of Digital Physics, as an alternative view, it would have been a Digital Big Bang with the lowest possible entropy in the Universe, 1 bit of information: a coordinate in the vast information matrix. If you ask what happened before the first observer and in the first moments after the Big Bang, the answer may surprise you with its straightforwardness: we extrapolate backwards in time, and that virtual model becomes "real" in our minds, as if we were witnessing the birth of the Universe.

In his theoretical work, Andrew Strominger of Harvard University speculates that the Alpha Point (the Big Bang) and the Omega Point form the so-called ‘Causal Diamond’ of the conscious observer, where the Alpha Point holds only 1 bit of entropy, as opposed to the maximal entropy of some incomprehensibly large number of bits at the Omega Point. While suggesting that we are part of a conscious Universe and that time is holographic in nature, Strominger places the origin of the Universe in the infinite, ultra-intelligent future, the Omega Singularity, rather than at the Big Bang.

The Universe is not what textbook physics tells us it is; we merely perceive it that way, and our instruments and measurement devices are, after all, extensions of our senses. Reality is not what it seems. Deep down it is pure information, waves of potentiality, with consciousness orchestrating it all. The Big Bang theory, which has drawn a lot of criticism of late, starts from the assumption of a "Universe from nothing" (a proverbial miracle that scientists have christened a ‘quantum fluctuation’), the initial Cosmological Singularity. But instead of this highly improbable happenstance, we can just as well operate from a different set of assumptions and place the initial Cosmological Singularity at the Omega Point: the transcendental attractor, the Source, or the omniversal holographic projector of all possible timelines.

A team of scientists at Freie Universität Berlin has developed an artificial intelligence (AI) method for calculating the ground state of the Schrödinger equation in quantum chemistry. The goal of quantum chemistry is to predict chemical and physical properties of molecules based solely on the arrangement of their atoms in space, avoiding the need for resource-intensive and time-consuming laboratory experiments. In principle, this can be achieved by solving the Schrödinger equation, but in practice this is extremely difficult.

Up to now, it has been impossible to find an exact solution for arbitrary molecules that can be computed efficiently. But the team at Freie Universität has developed a deep-learning method that achieves an unprecedented combination of accuracy and computational efficiency. AI has transformed many technological and scientific areas, from computer vision to materials science. “We believe that our approach may significantly impact the future of quantum chemistry,” says Professor Frank Noé, who led the team effort. The results were published in the renowned journal Nature Chemistry.

Central to both quantum chemistry and the Schrödinger equation is the wave function, a mathematical object that completely specifies the behavior of the electrons in a molecule. The wave function is a high-dimensional entity, and it is therefore extremely difficult to capture all the nuances that encode how the individual electrons affect each other. Many methods of quantum chemistry in fact give up on expressing the wave function altogether, instead attempting only to determine the energy of a given molecule. This, however, requires approximations to be made, limiting the prediction quality of such methods.
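
The group's own deep-learning architecture is not reproduced here, but the variational idea behind such approaches can be shown in a few lines. The sketch below is a minimal, assumption-laden illustration in plain NumPy: a one-parameter trial wave function for a 1D harmonic oscillator is sampled with Metropolis moves and its energy expectation is estimated for a few parameter values; a deep network essentially replaces the single parameter with millions of them.

```python
# Minimal variational Monte Carlo sketch -- NOT the Freie Universität code, just an
# illustration of optimizing a parametrized trial wave function.
# Toy system: 1D harmonic oscillator (hbar = m = omega = 1), trial psi_a(x) = exp(-a x^2).
import numpy as np

rng = np.random.default_rng(0)

def local_energy(x, a):
    # E_L(x) = (H psi)(x) / psi(x) for psi = exp(-a x^2)
    return a + x**2 * (0.5 - 2.0 * a**2)

def metropolis_samples(a, n=20000, step=1.0):
    # Draw samples from |psi_a(x)|^2 with a simple Metropolis random walk.
    x, samples = 0.0, []
    for _ in range(n):
        x_new = x + step * rng.normal()
        if rng.random() < np.exp(-2.0 * a * (x_new**2 - x**2)):  # acceptance ratio
            x = x_new
        samples.append(x)
    return np.array(samples[n // 10:])  # discard burn-in

def energy(a):
    xs = metropolis_samples(a)
    return local_energy(xs, a).mean()

# Scan the variational parameter; the exact ground state has a = 0.5, E = 0.5.
for a in (0.3, 0.4, 0.5, 0.6, 0.7):
    print(f"a = {a:.1f}   <E> ~ {energy(a):.3f}")
```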

This week, I had some amazing discussions with Navajo Nation Math Circle leaders — Dave Auckly and Henry Fowler. The idea of starting a math circle on Navajo land was initially brought up by a wonderful math educator and mathematician raised in Kazakhstan, Tatiana Shubin. Here is a small tribute to their efforts:


Project activities were launched in the fall of 2012. A team of distinguished mathematicians from all over the US, as well as local teachers and community members, works together to run the outreach. Navajo Nation Math Circles present math in the context of Navajo culture, helping students develop their identity as true Navajo mathematicians. “We want to find kids who would not have discovered their talents without our project, to help them realize that they can change the world,” says Fowler. Besides introducing Navajo children to the joy of mathematics, the project has also yielded a book, Inspiring Mathematics: Lessons from the Navajo Nation Math Circles, which contains lesson plans, puzzles, activities, and other insights for parents and teachers.

An extension of Navajo Nation Math Circles is an annual two-week Baa Hózhó summer math camp at Navajo Technical University. “Baa Hózhó” means “balance and harmony,” tying together the ideas of mathematical equilibrium with the way of life embraced by Navajo people. The summer camp is widely popular with parents and children; the older students come back as counselors, making everyone feel like one big family. It is preceded by an annual student-run math festival in local schools across the Navajo Nation, where students share their passion for mathematics with families and friends.

Fowler’s ultimate goal is to create a mathematical research institute on Navajo land, where local and international researchers could exchange ideas and study the best ways of teaching mathematics to Indigenous people, enriching the mathematical sciences worldwide. Hopefully, the great strides in Navajo Nation math education will encourage leading high-tech companies to support the rise of a new generation of diverse, talented, and passionate Native American STEM professionals.

Scientists have new evidence that Earth’s many periodic mass extinctions follow a cycle of about 27 million years, connecting the five major mass extinctions with more minor ones occurring throughout Earth’s life-fostering timespan. The artificial intelligence analysis could also shift how evolutionary scientists think about the aftermath of mass extinctions.
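
The study's own statistical analysis is not reproduced here; as a hedged illustration of how a roughly 27-million-year rhythm can be extracted from a list of event ages, the sketch below folds synthetic ages at a range of candidate periods and scores how tightly the folded phases cluster (a Rayleigh-type test). The ages, the search range, and the rayleigh_power helper are illustrative assumptions, not the paper's data or method.

```python
# Hedged sketch: find a candidate period in a list of event ages with a Rayleigh-type
# circular test. The ages below are synthetic (27 Myr grid plus noise) -- NOT real data.
import numpy as np

rng = np.random.default_rng(1)
ages = np.sort(27.0 * np.arange(1, 19) + rng.normal(0.0, 3.0, 18))  # Myr before present

def rayleigh_power(ages, period):
    # Concentration of event phases when the ages are folded at the candidate period.
    phases = 2.0 * np.pi * (ages % period) / period
    return (np.cos(phases).sum() ** 2 + np.sin(phases).sum() ** 2) / len(ages)

periods = np.linspace(10.0, 60.0, 501)
power = np.array([rayleigh_power(ages, p) for p in periods])
print(f"best-fitting period ~ {periods[power.argmax()]:.1f} Myr")  # lands near 27 Myr
```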

Neuroscientists find that interpreting code activates a general-purpose brain network, but not language-processing centers.

In some ways, learning to program a computer is similar to learning a new language. It requires learning new symbols and terms, which must be organized correctly to instruct the computer what to do. The computer code must also be clear enough that other programmers can read and understand it.

In spite of those similarities, MIT neuroscientists have found that reading computer code does not activate the regions of the brain that are involved in language processing. Instead, it activates a distributed network called the multiple demand network, which is also recruited for complex cognitive tasks such as solving math problems or crossword puzzles.

By mapping the brain activity of expert computer programmers while they puzzled over code, Johns Hopkins University scientists have found the neural mechanics behind this increasingly vital skill. “People want to know what makes someone a good programmer,” said Yun-Fei Liu, a Ph.D. student in the university’s Neuroplasticity and Development Lab, who led the study. “If we know what kind of neural mechanisms are activated when someone is programming, we might be able to find a better training program for programmers.”

Though researchers have long suspected that the brain mechanism for computer programming would be similar to that for math or even language, this study revealed that when seasoned coders work, most of the activity happens in the network responsible for logical reasoning, albeit in the left hemisphere, the side favored by language.

“Because there are so many ways people learn programming, everything from do-it-yourself tutorials to formal courses, it’s surprising that we find such a consistent brain activation pattern across people who code,” Liu said. “It’s especially surprising because we know there seems to be a crucial period that usually terminates in early adolescence for language acquisition, but many people learn to code as adults.”

Computer programming is a novel cognitive tool that has transformed modern society. What cognitive and neural mechanisms support this skill? Here, we used functional magnetic resonance imaging to investigate two candidate brain systems: the multiple demand (MD) system, typically recruited during math, logic, problem solving, and executive tasks, and the language system, typically recruited during linguistic processing. We examined MD and language system responses to code written in Python, a text-based programming language (Experiment 1) and in ScratchJr, a graphical programming language (Experiment 2); for both, we contrasted responses to code problems with responses to content-matched sentence problems. We found that the MD system exhibited strong bilateral responses to code in both experiments, whereas the language system responded strongly to sentence problems, but weakly or not at all to code problems. Thus, the MD system supports the use of novel cognitive tools even when the input is structurally similar to natural language.
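
As a hedged sketch of what the core contrast amounts to (not the authors' analysis pipeline), the snippet below takes synthetic per-trial responses for a "multiple demand" and a "language" region of interest and computes the code-versus-sentence difference with a paired t-test; the numbers, sample sizes, and region labels are all assumptions for illustration.

```python
# Hedged illustration of a "code problems vs. content-matched sentence problems" contrast
# within two regions of interest. All responses here are synthetic -- NOT the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
md_code, md_sent = rng.normal(1.2, 0.4, 40), rng.normal(0.5, 0.4, 40)      # MD system ROI
lang_code, lang_sent = rng.normal(0.2, 0.4, 40), rng.normal(1.0, 0.4, 40)  # language ROI

for name, code, sent in [("MD system", md_code, md_sent),
                         ("language system", lang_code, lang_sent)]:
    t, p = stats.ttest_rel(code, sent)  # paired code > sentence contrast
    print(f"{name}: code - sentence = {code.mean() - sent.mean():+.2f}  (t = {t:.2f}, p = {p:.2g})")
```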

Fast-spinning black holes could have features different from those predicted by general relativity.


General relativity is a profoundly complex mathematical theory, but its description of black holes is amazingly simple. A stable black hole can be described by just three properties: its mass, its electric charge, and its rotation or spin. Since black holes aren’t likely to have much charge, it really takes just two properties. If you know a black hole’s mass and spin, you know all there is to know about the black hole.
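
For reference, the standard Kerr-Newman solution makes the "just three numbers" statement concrete: in geometric units (G = c = 1), the radius of the outer event horizon depends only on the mass M, the charge Q, and the spin J.

```latex
% Outer event horizon of a charged, rotating (Kerr-Newman) black hole, G = c = 1.
r_{+} = M + \sqrt{M^{2} - a^{2} - Q^{2}}, \qquad a \equiv \frac{J}{M}
```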

This property is often summarized by the no-hair theorem. Specifically, the theorem asserts that once matter falls into a black hole, the only characteristics that remain are the mass, charge, and spin it contributes. You could make a black hole out of a Sun’s worth of hydrogen, chairs, or those old copies of National Geographic from Grandma’s attic, and there would be no difference. Mass is mass as far as general relativity is concerned. In every case the event horizon of a black hole is perfectly smooth, with no extra features. As Jacob Bekenstein said, black holes have no hair.

But for all its predictive power, general relativity has a problem with quantum theory, and the problem is particularly acute for black holes. If the no-hair theorem is correct, the information held within an object is destroyed when it crosses the event horizon, yet quantum theory says that information can never be destroyed. So the valid theory of gravity is contradicted by the valid theory of the quanta. This leads to problems such as the firewall paradox, in which physics cannot seem to decide whether an event horizon should be hot or cold.
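
Two standard formulas sharpen that "hot or cold" tension: semiclassical gravity assigns the horizon a tiny but nonzero Hawking temperature and an enormous Bekenstein-Hawking entropy, an amount of microstructure that a perfectly featureless horizon seems to leave no room for.

```latex
% Hawking temperature and Bekenstein-Hawking entropy (A = horizon area).
T_{H} = \frac{\hbar c^{3}}{8 \pi G M k_{B}}, \qquad
S_{BH} = \frac{k_{B} c^{3} A}{4 G \hbar}
```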

Researchers have found a way to protect highly fragile quantum systems from noise, which could aid in the design and development of new quantum devices, such as ultra-powerful quantum computers.

The researchers, from the University of Cambridge, have shown that microscopic particles can remain intrinsically linked, or entangled, over long distances even if there are random disruptions between them. Using the mathematics of quantum theory, they discovered a simple setup in which entangled particles can be prepared and stabilized even in the presence of noise, by taking advantage of a previously unknown symmetry in quantum mechanics.
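
The specific symmetry reported in the paper is not reproduced here, but the general mechanism can be illustrated with a textbook case: the two-qubit singlet state sits in a decoherence-free subspace of collective dephasing noise, so its entanglement survives random phase kicks that scramble other Bell states. The NumPy sketch below, with its synthetic noise model, is an assumption-laden illustration rather than the Cambridge group's setup.

```python
# Hedged sketch: symmetry-protected entanglement under *collective* dephasing noise.
# The singlet state is invariant (decoherence-free subspace); the |00>+|11> state is not.
import numpy as np

rng = np.random.default_rng(0)
Z, I = np.diag([1.0, -1.0]), np.eye(2)
Zsum = np.kron(Z, I) + np.kron(I, Z)               # collective dephasing generator

singlet  = np.array([0, 1, -1, 0]) / np.sqrt(2)    # (|01> - |10>)/sqrt(2)
phi_plus = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)

def avg_fidelity(state, n=2000):
    # Average overlap with the initial state after a random collective phase kick.
    f = 0.0
    for _ in range(n):
        theta = rng.normal(0.0, 1.0)
        U = np.diag(np.exp(-1j * theta * np.diag(Zsum) / 2))  # exp(-i*theta*Zsum/2), diagonal
        f += abs(np.vdot(state, U @ state)) ** 2
    return f / n

print("singlet :", round(avg_fidelity(singlet), 3))    # stays ~1.0 (protected by symmetry)
print("phi+    :", round(avg_fidelity(phi_plus), 3))   # drops well below 1 under the same noise
```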

Their results, reported in the journal Physical Review Letters, open a new window into the mysterious quantum world that could revolutionize future technology by preserving quantum effects in noisy environments, which is the single biggest hurdle for developing such technology. Harnessing this capability will be at the heart of ultrafast quantum computers.