Are you a cutting-edge AI researcher looking for models with clean semantics that can represent the context-specific causal dependencies necessary for causal induction? If so, maybe you should take a look at good old-fashioned probability trees.
Probability trees may have been around for decades, but they have received little attention from the AI and ML community. Until now. “Probability trees are one of the simplest models of causal generative processes,” explains the new DeepMind paper Algorithms for Causal Reasoning in Probability Trees, which the authors say is the first to propose concrete algorithms for causal reasoning in discrete probability trees.
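To make the idea concrete, here is a minimal sketch of what a discrete probability tree looks like as a data structure. This is not DeepMind's implementation; the `Node` class, the `total_probability` helper, and the coin-flip example are hypothetical illustrations. Each node carries the statement realised at that point in the process, and its outgoing branches carry probabilities that sum to one — so a child's distribution can depend on the specific context established by its ancestors.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in a discrete probability tree.

    `statement` is the event realised at this node (e.g. a variable
    assignment), and `children` maps a branch label to a
    (probability, subtree) pair. Branch probabilities sum to 1.
    """
    statement: str
    children: dict = field(default_factory=dict)  # label -> (prob, Node)

def total_probability(node: Node, event: str) -> float:
    """Probability that `event` occurs at or below `node`."""
    if node.statement == event:
        return 1.0
    return sum(p * total_probability(child, event)
               for p, child in node.children.values())

# Hypothetical two-stage process: a coin flip C, then an outcome E whose
# distribution depends on C (a context-specific dependency).
tree = Node("root", {
    "C=heads": (0.5, Node("C=heads", {
        "E=1": (0.9, Node("E=1")),
        "E=0": (0.1, Node("E=0")),
    })),
    "C=tails": (0.5, Node("C=tails", {
        "E=1": (0.2, Node("E=1")),
        "E=0": (0.8, Node("E=0")),
    })),
})

print(total_probability(tree, "E=1"))  # 0.5*0.9 + 0.5*0.2 = 0.55
```

Because the two subtrees under C=heads and C=tails can have entirely different branch structures, a probability tree can express context-specific dependencies that a single fixed graph over variables cannot.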
Cognitive scientists say that humans learn to reason in large part by inducing causal relationships from observation, and that we do this remarkably well. Even when the data we perceive is sparse and limited, we quickly infer causal structure from interactions between physical objects, observed co-occurrence frequencies between causes and effects, and so on.