
Algorithmic Intelligence Has Gotten So Good, It’s Easy To Forget It’s Artificial

Artificial intelligence becomes hard to ignore when it starts taking over tasks that used to require human judgment, such as winnowing job applications or prioritizing stories in a news feed.

Quantum supremacy sounds like something out of a Marvel movie. But for scientists working at the forefront of quantum computing, the hope (and hype) of this fundamentally different method of processing information is very real. Thanks to the quirky properties of quantum mechanics, quantum computers have the potential to massively speed up certain types of problems, particularly those that simulate nature.

Scientists are especially enthralled with the idea of marrying the quantum world with machine learning. Despite all their achievements, our silicon learning buddies remain handicapped: machine learning algorithms and traditional CPUs don’t play well together, partly because these resource-hungry algorithms tax classical computing hardware.

Add in a dose of quantum computing, however, and machine learning could potentially tackle complex problems beyond current abilities in a fraction of the time.
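
To make the scale of that promise concrete, here is a minimal, purely classical sketch (assuming nothing beyond NumPy) of the bookkeeping a quantum computer handles natively: describing n qubits takes 2^n amplitudes, the exponential state space that quantum machine-learning proposals hope to exploit. The function name and qubit count below are illustrative, not drawn from any particular quantum-computing library.

```python
# Purely classical sketch of quantum state bookkeeping (illustrative only):
# n qubits are described by 2**n amplitudes. Simulating this on a CPU gains
# no speedup -- the point is just to show the exponential state space.
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # single-qubit Hadamard gate

def uniform_superposition(n_qubits: int) -> np.ndarray:
    """Apply a Hadamard to every qubit of |00...0>, giving 2**n equal amplitudes."""
    gate = H
    for _ in range(n_qubits - 1):
        gate = np.kron(gate, H)       # tensor product builds the n-qubit operator
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0                    # start in the all-zeros basis state
    return gate @ state

state = uniform_superposition(4)
print(len(state))      # 16 amplitudes for just 4 qubits; 2**50 for 50 qubits
print(state[:4])       # each amplitude equals 1/sqrt(16) = 0.25
```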

Conscious “free will” is problematic because (1) brain mechanisms causing consciousness are unknown, (2) measurable brain activity correlating with conscious perception apparently occurs too late for real-time conscious response, consciousness thus being considered “epiphenomenal illusion,” and (3) determinism, i.e., our actions and the world around us seem algorithmic and inevitable. The Penrose–Hameroff theory of “orchestrated objective reduction (Orch OR)” identifies discrete conscious moments with quantum computations in microtubules inside brain neurons, e.g., 40/s in concert with gamma synchrony EEG. Microtubules organize neuronal interiors and regulate synapses. In Orch OR, microtubule quantum computations occur in integration phases in dendrites and cell bodies of integrate-and-fire brain neurons connected and synchronized by gap junctions, allowing entanglement of microtubules among many neurons. Quantum computations in entangled microtubules terminate by Penrose “objective reduction (OR),” a proposal for quantum state reduction and conscious moments linked to fundamental spacetime geometry. Each OR reduction selects microtubule states which can trigger axonal firings and control behavior. The quantum computations are “orchestrated” by synaptic inputs and memory (thus “Orch OR”). If correct, Orch OR can account for conscious causal agency, resolving problem 1. Regarding problem 2, Orch OR can cause temporal non-locality, sending quantum information backward in classical time, enabling conscious control of behavior. Three lines of evidence for brain backward time effects are presented. Regarding problem 3, Penrose OR (and Orch OR) invokes non-computable influences from information embedded in spacetime geometry, potentially avoiding algorithmic determinism. In summary, Orch OR can account for real-time conscious causal agency, avoiding the need for consciousness to be seen as epiphenomenal illusion. Orch OR can rescue conscious free will.

Keywords: microtubules, free will, consciousness, Penrose-Hameroff Orch OR, volition, quantum computing, gap junctions, gamma synchrony.

We have the sense of conscious control of our voluntary behaviors, of free will, of our mental processes exerting causal actions in the physical world. But such control is difficult to explain scientifically for three reasons: the brain mechanisms causing consciousness are unknown; measurable brain activity correlating with conscious perception appears to occur too late for real-time conscious response; and our actions, like the world around us, seem algorithmic and deterministic.

Our deepfake problem is about to get worse: Samsung engineers have now developed a way to generate realistic talking heads from a single image, so AI can even put words in the mouth of the Mona Lisa.

The new algorithms, developed by a team from the Samsung AI Center and the Skolkovo Institute of Science and Technology, both in Moscow, work best with a variety of sample images taken at different angles, but they can be quite effective with just one picture to work from, even a painting.

[Image: animated Mona Lisa “talking head” (Egor Zakharov)]

Flashback to 2 years ago…


Scientists from Maastricht University have developed a method to look into a person’s brain and read out who has spoken to him or her and what was said. With the help of neuroimaging and data mining techniques, the researchers mapped the brain activity associated with the recognition of speech sounds and voices.

In their Science article “‘Who’ is Saying ‘What’? Brain-Based Decoding of Human Voice and Speech,” the four authors demonstrate that speech sounds and voices can be identified by means of a unique ‘neural fingerprint’ in the listener’s brain. In the future this new knowledge could be used to improve computer systems for automatic speech and speaker recognition.

Seven study subjects listened to three different speech sounds (the vowels /a/, /i/ and /u/), spoken by three different people, while their brain activity was mapped using neuroimaging techniques (fMRI). With the help of data mining methods, the researchers developed an algorithm to translate this brain activity into unique patterns that determine the identity of a speech sound or a voice. The acoustic characteristics of the speakers’ vocal cord vibrations were found to determine these neural activity patterns.
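
The paper describes its own data-mining pipeline; purely as an illustration of the general decoding idea (train a classifier that maps a pattern of voxel activity to the stimulus it accompanied), here is a minimal sketch using scikit-learn on synthetic data. The trial counts, voxel counts, and the injected “fingerprint” signal are placeholders, not the study’s data or method.

```python
# Illustrative "brain decoding" sketch, not the Maastricht team's pipeline:
# train a classifier that maps a pattern of voxel activity to the stimulus
# label (which vowel was heard). All data here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 90, 500              # e.g. 30 trials per vowel, 500 voxels
labels = np.repeat(["a", "i", "u"], 30)   # stimulus label for each trial

# Synthetic fMRI-like data: noise plus a small label-dependent signal in a
# different subset of voxels per vowel, standing in for a "neural fingerprint".
X = rng.normal(size=(n_trials, n_voxels))
for k, vowel in enumerate(["a", "i", "u"]):
    X[labels == vowel, k * 20:(k + 1) * 20] += 0.8

decoder = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(decoder, X, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance level = 0.33)")
```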

Scientists found a way to make sense of particularly chaotic events in nature.

Thanks to a new set of equations for modeling turbulence, scientists can now better predict things like galaxy formation in distant space, complex weather patterns here on Earth, and nuclear fusion. According to the research, published this spring in the journal Physical Review Letters, turbulence may start out chaotic but then settles into a more uniform pattern that scientists can readily model and understand.

Rutgers computer scientists used artificial intelligence to control a robotic arm that provides a more efficient way to pack boxes, saving businesses time and money.

“We can achieve low-cost, automated solutions that are easily deployable. The key is to make minimal but effective hardware choices and focus on robust algorithms and software,” said the study’s senior author Kostas Bekris, an associate professor in the Department of Computer Science in the School of Arts and Sciences at Rutgers University-New Brunswick.

Bekris, Abdeslam Boularias and Jingjin Yu, both assistant professors of computer science, formed a team to deal with multiple aspects of the robot packing problem in an integrated way through hardware, 3D perception and robust motion planning.
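
The Rutgers pipeline integrates hardware choices, 3D perception, and motion planning, none of which is reproduced here. Purely to give a flavor of the high-level placement decision, here is a classic first-fit-decreasing packing heuristic in a one-dimensional, volume-only simplification; the function name and example numbers are hypothetical.

```python
# Toy packing heuristic (illustrative only, not the Rutgers system): greedy
# first-fit decreasing by volume. A real robotic packer must also respect 3D
# geometry, stability and arm reachability; this only sketches the flavor of
# the high-level "which box goes where" decision.
from typing import List

def first_fit_decreasing(item_volumes: List[int], bin_capacity: int) -> List[List[int]]:
    """Assign items to bins using the classic FFD heuristic."""
    bins: List[List[int]] = []   # each bin is a list of item volumes
    free: List[int] = []         # remaining capacity per bin
    for vol in sorted(item_volumes, reverse=True):   # largest items first
        for i, room in enumerate(free):
            if vol <= room:                          # first bin with room wins
                bins[i].append(vol)
                free[i] -= vol
                break
        else:                                        # nothing fits: open a new bin
            bins.append([vol])
            free.append(bin_capacity - vol)
    return bins

print(first_fit_decreasing([5, 7, 3, 2, 4, 1], bin_capacity=10))
# -> [[7, 3], [5, 4, 1], [2]]
```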

The field has narrowed in the race to protect sensitive electronic information from the threat of quantum computers, which one day could render many of our current encryption methods obsolete.

As the latest step in its program to develop effective defenses, the National Institute of Standards and Technology (NIST) has winnowed the group of potential encryption tools—known as cryptographic algorithms—down to a bracket of 26. These algorithms are the ones NIST mathematicians and computer scientists consider to be the strongest candidates submitted to its Post-Quantum Cryptography Standardization project, whose goal is to create a set of standards for protecting electronic information from attack by the computers of both tomorrow and today.
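
Among the encryption methods at risk are public-key schemes such as RSA, whose security rests on the difficulty of factoring large integers, a problem Shor’s algorithm could solve efficiently on a large quantum computer. The toy sketch below (deliberately tiny primes, standard-library Python only) is not a real implementation; it just shows that whoever can factor the public modulus can reconstruct the private key.

```python
# Toy RSA with deliberately tiny primes, to show why factoring breaks it.
# Real keys use moduli of 2048 bits or more; a large quantum computer running
# Shor's algorithm could factor those efficiently, which is the threat the
# NIST post-quantum candidates are meant to address.
p, q = 61, 53                  # secret primes (tiny, for illustration only)
n = p * q                      # public modulus: 3233
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime with phi
d = pow(e, -1, phi)            # private exponent (requires Python 3.8+)

message = 1234
ciphertext = pow(message, e, n)          # encrypt with the public key (n, e)
assert pow(ciphertext, d, n) == message  # decrypt with the private key d

# An attacker who can factor n recovers p and q, recomputes phi and d,
# and reads the message -- exactly what Shor's algorithm would enable.
recovered_d = pow(e, -1, (p - 1) * (q - 1))
print(pow(ciphertext, recovered_d, n))   # -> 1234
```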

“These 26 algorithms are the ones we are considering for potential standardization, and for the next 12 months we are requesting that the cryptography community focus on analyzing their performance,” said NIST mathematician Dustin Moody. “We want to get better data on how they will perform in the real world.”