Unless you’re a physicist or an engineer, there really isn’t much reason for you to know about partial differential equations. I know. After years of poring over them as a mechanical engineering undergrad, I’ve never once used them in the real world.

But partial differential equations, or PDEs, are also kind of magical. They’re a category of math equations that are really good at describing change over space and time, and thus very handy for describing the physical phenomena in our universe. They can be used to model everything from planetary orbits to plate tectonics to the air turbulence that disturbs a flight, which in turn allows us to do practical things like predict seismic activity and design safe planes.

The catch is that PDEs are notoriously hard to solve. And here, the meaning of “solve” is perhaps best illustrated by an example. Say you are trying to simulate air turbulence to test a new plane design. There is a well-known PDE called Navier-Stokes that describes the motion of any fluid. “Solving” Navier-Stokes allows you to take a snapshot of the air’s motion (a.k.a. wind conditions) at any point in time and model how it will continue to move, or how it was moving before.
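
Navier-Stokes itself is far too complex to solve in a few lines, but a much simpler PDE can show what a numerical “solve” looks like in practice: take a snapshot of the field and repeatedly step it forward in time. Below is a minimal sketch using the one-dimensional heat equation as a stand-in; the constants are arbitrary and purely illustrative.

```python
# A toy illustration of numerically "solving" a PDE: the 1D heat equation
# u_t = alpha * u_xx, stepped forward in time from an initial snapshot.
# (A stand-in for Navier-Stokes, which is far more involved.)
import numpy as np

alpha = 0.01            # diffusivity (arbitrary value for illustration)
nx, nt = 100, 500       # spatial grid points, time steps
dx, dt = 1.0 / nx, 0.001

x = np.linspace(0.0, 1.0, nx)
u = np.exp(-((x - 0.5) ** 2) / 0.01)   # the initial "snapshot" of the field

for _ in range(nt):
    # second spatial derivative via central differences (interior points only)
    u_xx = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
    u[1:-1] += dt * alpha * u_xx       # explicit Euler time step

print(round(float(u.max()), 3))        # the initial bump has spread and decayed
```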

As progress in traditional computing slows, new forms of computing are coming to the forefront. At Penn State, a team of engineers is attempting to pioneer a type of computing that mimics the efficiency of the brain’s neural networks while exploiting the brain’s analog nature.

Modern computing is digital, made up of two states, on-off or one and zero. An analog computer, like the brain, has many possible states. It is the difference between flipping a light switch on or off and turning a dimmer switch to varying levels of light.
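
To put the analogy in concrete terms, here is a trivial sketch (not tied to any particular hardware) of the gap between the two: a bit distinguishes exactly two states, while an analog quantity can sit anywhere on a continuum, approximated below by a dimmer with many settings.

```python
# Light switch vs dimmer, in code (illustrative only).
digital_states = [0, 1]                        # on / off
analog_levels = [i / 255 for i in range(256)]  # 256 dimmer settings standing in
                                               # for a true continuum of states

print(len(digital_states), "digital states")
print(len(analog_levels), "analog-like levels")
```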

Neuromorphic or brain-inspired computing has been studied for more than 40 years, according to Saptarshi Das, the team leader and Penn State assistant professor of engineering science and mechanics. What’s new is that as the limits of digital computing have been reached, the need for high-speed image processing, for instance for self-driving cars, has grown. The rise of big data, which requires types of pattern recognition for which the brain architecture is particularly well suited, is another driver in the pursuit of neuromorphic computing.

Are you a cutting-edge AI researcher looking for models with clean semantics that can represent the context-specific causal dependencies necessary for causal induction? If so, maybe you should take a look at good old-fashioned probability trees.

Probability trees may have been around for decades, but they have received little attention from the AI and ML community. Until now. “Probability trees are one of the simplest models of causal generative processes,” explains the new DeepMind paper Algorithms for Causal Reasoning in Probability Trees, which the authors say is the first to propose concrete algorithms for causal reasoning in discrete probability trees.
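
As a rough illustration (not necessarily the paper’s exact representation), a discrete probability tree can be sketched as nodes that each carry a statement and a set of weighted branches, with outcomes drawn by sampling a path from root to leaf.

```python
# A minimal probability-tree sketch: each node holds a statement (a partial
# variable assignment) and weighted child branches; sampling walks root to leaf.
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    statement: dict                                 # e.g. {"rain": True}
    children: list = field(default_factory=list)    # list of (probability, Node)

    def sample(self, rng=random):
        """Draw one root-to-leaf path, accumulating statements along the way."""
        outcome = dict(self.statement)
        node = self
        while node.children:
            probs, kids = zip(*node.children)
            node = rng.choices(kids, weights=probs, k=1)[0]
            outcome.update(node.statement)
        return outcome

# The effect's distribution depends on which branch was taken -- the kind of
# context-specific dependency the paper's causal-reasoning algorithms target.
tree = Node({}, [
    (0.3, Node({"rain": True},  [(0.9, Node({"wet": True})), (0.1, Node({"wet": False}))])),
    (0.7, Node({"rain": False}, [(0.1, Node({"wet": True})), (0.9, Node({"wet": False}))])),
])
print(tree.sample())
```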

Humans naturally learn to reason in large part through inducing causal relationships from our observations, and we do this remarkably well, cognitive scientists say. Even when the data we perceive is sparse and limited, we can quickly learn causal structures, such as the interactions between physical objects, from observations of the co-occurrence frequencies between causes and effects.
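
A toy numerical version of that kind of induction, which is not a claim about how humans actually do it, is simply comparing how often the effect shows up with the candidate cause versus without it.

```python
# Illustrative only: estimate effect frequencies from observed co-occurrences.
from collections import Counter

observations = [("cause", "effect"), ("cause", "effect"), ("cause", "no_effect"),
                ("no_cause", "no_effect"), ("no_cause", "effect"), ("no_cause", "no_effect")]
counts = Counter(observations)

p_effect_given_cause = counts[("cause", "effect")] / (
    counts[("cause", "effect")] + counts[("cause", "no_effect")])
p_effect_given_no_cause = counts[("no_cause", "effect")] / (
    counts[("no_cause", "effect")] + counts[("no_cause", "no_effect")])

# A large gap between the two frequencies is (weak) evidence of a causal link.
print(round(p_effect_given_cause, 2), round(p_effect_given_no_cause, 2))
```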

Tuomas Sandholm, a computer scientist at Carnegie Mellon University, is not a poker player—or much of a poker fan, in fact—but he is fascinated by the game for much the same reason as the great game theorist John von Neumann before him. Von Neumann, who died in 1957, viewed poker as the perfect model for human decision making, for finding the balance between skill and chance that accompanies our every choice. He saw poker as the ultimate strategic challenge, combining as it does not just the mathematical elements of a game like chess but the uniquely human, psychological angles that are more difficult to model precisely—a view shared years later by Sandholm in his research with artificial intelligence.

“Poker is the main benchmark and challenge program for games of imperfect information,” Sandholm told me on a warm spring afternoon in 2018, when we met in his offices in Pittsburgh. The game, it turns out, has become the gold standard for developing artificial intelligence.

Tall and thin, with wire-frame glasses and neat brown hair framing a friendly face, Sandholm is behind the creation of three computer programs designed to test their mettle against human poker players: Claudico, Libratus, and most recently, Pluribus. (When we met, Libratus was still a toddler and Pluribus didn’t yet exist.) The goal isn’t to solve poker, as such, but to create algorithms whose decision-making prowess in poker’s world of imperfect information and stochastic situations—situations that are randomly determined and unable to be predicted—can then be applied to other stochastic realms, like the military, business, government, cybersecurity, even health care.
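
Those programs are publicly described as building on counterfactual regret minimization. At the core of that family of methods is a regret-matching update, sketched below on a toy game (rock-paper-scissors against a fixed opponent) purely for illustration; this is not Sandholm’s actual code.

```python
# Regret matching on rock-paper-scissors: play more of whatever you regret
# not having played, and the average strategy converges to a good one.
import random

ACTIONS = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(a, b):
    return 0 if a == b else (1 if BEATS[a] == b else -1)

def strategy_from_regrets(regrets):
    positive = [max(r, 0.0) for r in regrets]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

regret_sum = [0.0, 0.0, 0.0]
strategy_sum = [0.0, 0.0, 0.0]
opponent = [0.5, 0.3, 0.2]   # a fixed, exploitable opponent strategy

for _ in range(10000):
    strat = strategy_from_regrets(regret_sum)
    my_action = random.choices(ACTIONS, weights=strat)[0]
    opp_action = random.choices(ACTIONS, weights=opponent)[0]
    got = payoff(my_action, opp_action)
    # regret: how much better each alternative action would have done
    for i, alt in enumerate(ACTIONS):
        regret_sum[i] += payoff(alt, opp_action) - got
    strategy_sum = [s + p for s, p in zip(strategy_sum, strat)]

avg = [s / sum(strategy_sum) for s in strategy_sum]
print({a: round(p, 2) for a, p in zip(ACTIONS, avg)})  # drifts toward all-paper,
                                                       # the best reply here
```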

MIT looked at the original Roboat as a “quarter-scale” option, with Roboat II being half-scale; they’re slowly working up to a full-scale version that can carry four to six passengers. That bigger version is already under construction in Amsterdam, but there’s no word on when it’ll be ready for testing. In the meantime, Roboat II seems like it can pretty effectively navigate Amsterdam — MIT says it autonomously navigated the city’s canals for three hours collecting data and returned to its starting point with an error margin of less than seven inches.

Going forward, the MIT team expects to keep improving the Roboat’s algorithms to make it better able to deal with the challenges a boat might find, like disturbances from currents and waves. They’re also working to make it more capable of identifying and “understanding” objects it comes across so it can better deal with the environment it’s in. Everything the half-scale Roboat II learns will naturally be applied to the full-scale version that’s being worked on now. There’s no word on when we might see that bigger Roboat out in the waters, though.

Moving from one-algorithm to one-brain is one of the biggest open challenges in AI. A one-brain AI would still not be a true intelligence, only a better general-purpose AI—Legg’s multi-tool. But whether they’re shooting for AGI or not, researchers agree that today’s systems need to be made more general-purpose, and for those who do have AGI as the goal, a general-purpose AI is a necessary first step.

The second law of thermodynamics delineates an asymmetry in how physical systems evolve over time, known as the arrow of time. In macroscopic systems, this asymmetry has a clear direction (e.g., one can easily notice if a video showing a system’s evolution over time is being played normally or backward).

In the microscopic world, however, this direction is not always apparent. In fact, fluctuations in microscopic systems can lead to clear violations of the second law, causing the arrow of time to become blurry and less defined. As a result, when watching a video of a microscopic process, it can be difficult, if not impossible, to determine whether it is being played normally or backwards.

Researchers at the University of Maryland developed a machine learning algorithm that can infer the direction of the thermodynamic arrow of time in both macroscopic and microscopic processes. This algorithm, presented in a paper published in Nature Physics, could ultimately help to uncover new physical principles related to thermodynamics.
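
The paper has the details of how the algorithm is trained, but the underlying intuition can be shown with a toy model that makes no assumptions about the authors’ actual method: for a drift-plus-noise trajectory, a naive “did it move forward on net?” rule identifies the direction of time almost perfectly when drift dominates (the macroscopic-like case) and barely beats a coin flip when noise dominates (the microscopic-like case).

```python
# Toy arrow-of-time guesser (illustrative only, not the paper's method):
# show a trajectory either forward or time-reversed, guess "forward" whenever
# its net displacement is positive, and measure how often that guess is right.
import numpy as np

rng = np.random.default_rng(0)

def accuracy(drift, noise, steps=100, trials=2000):
    correct = 0
    for _ in range(trials):
        increments = drift + noise * rng.standard_normal(steps)
        forward = rng.random() < 0.5                 # secretly flip a coin
        shown = increments if forward else -increments[::-1]
        guess_forward = shown.sum() > 0              # the naive decision rule
        correct += int(guess_forward == forward)
    return correct / trials

print("macroscopic-like (strong drift):", accuracy(drift=1.0, noise=0.1))
print("microscopic-like (weak drift):  ", accuracy(drift=0.01, noise=1.0))
```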