Learning a Theory of Causality


We consider causality as a domain-general intuitive theory and ask whether this intuitive theory can be learned from the co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality, and a range of alternatives, in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before the specific causal models have been learned, an effect we term the "blessing of abstraction". We then explore the effect of providing a variety of auxiliary evidence, and find that a collection of simple "input analyzers" can help to bootstrap abstract knowledge. Together these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality, but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion.
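The "blessing of abstraction" can be illustrated with a toy hierarchical Bayesian model. The sketch below is hypothetical and much simpler than the paper's logical language: an abstract theory (do causes raise the probability of their effects, or not?) is shared across many small causal systems, while each system has its own hidden structure (which of two variables is the true cause). Because the abstract level pools evidence from every system, its posterior can sharpen even while each system's structure remains uncertain. All names, probabilities, and system counts here are illustrative assumptions, not values from the paper.

```python
import math
import random

random.seed(0)

# Abstract theory T: "causal" (true cause makes the effect likely) vs.
# "independent" (effect ignores both variables). Probabilities are toy choices.
P_E = {
    "causal": lambda c: 0.9 if c else 0.1,   # P(effect=1 | cause value)
    "independent": lambda c: 0.5,             # effect unrelated to cause
}

def lik_obs(theory, struct, obs):
    """Likelihood of one observation (a, b, e) given theory and structure."""
    a, b, e = obs
    c = a if struct == "A" else b
    p = P_E[theory](c)
    return p if e else 1 - p

def system_lik(theory, data):
    """System marginal likelihood: sum over hidden structure (uniform prior)."""
    return 0.5 * sum(
        math.prod(lik_obs(theory, s, o) for o in data) for s in ("A", "B")
    )

def sample_obs(struct):
    """Generate one observation from a system whose true cause is `struct`."""
    a, b = random.random() < 0.5, random.random() < 0.5
    c = a if struct == "A" else b
    e = random.random() < (0.9 if c else 0.1)
    return (a, b, e)

# Many systems, few observations each: the abstract theory sees all 80
# observations, while each structure posterior sees only its own 4.
n_systems, n_obs = 20, 4
systems = [[sample_obs("A") for _ in range(n_obs)] for _ in range(n_systems)]

lik_causal = math.prod(system_lik("causal", d) for d in systems)
lik_indep = math.prod(system_lik("independent", d) for d in systems)
p_theory = lik_causal / (lik_causal + lik_indep)
print(f"P(theory=causal | all data) = {p_theory:.3f}")

# Structure posterior for one system under the correct abstract theory:
d = systems[0]
lA = math.prod(lik_obs("causal", "A", o) for o in d)
lB = math.prod(lik_obs("causal", "B", o) for o in d)
p_struct = lA / (lA + lB)
print(f"P(structure=A | system 1)   = {p_struct:.3f}")
```

With only a handful of observations per system, the pooled posterior over the abstract theory typically becomes near-certain while individual structure posteriors are still ambiguous, which is the qualitative effect the abstract describes.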
