Belief Propagation and Locally Bayesian Learning

Abstract

Highlighting, a conditioning effect, consists of both primacy-like and recency-like effects in human subjects. This combination of effects is notoriously difficult for Bayesian models to produce. An approximation to probabilistic inference, Locally Bayesian Learning (LBL), can predict highlighting by partitioning the model into regions during learning and passing messages between these regions. While the approximation matches behavior in this task, it is unclear how LBL compares to other approximations used in Bayesian models, and what behaviors this approximation will predict in other paradigms. Our contribution is to show that LBL is closely related to the statistical algorithms of Assumed Density Filtering (ADF), which simplifies calculations by assuming independence, and belief propagation, which identifies how to make these calculations through message passing. We propose that people use ADF to learn and show how this model can produce highlighting behavior. In addition, we demonstrate how the differing degrees of approximation used in LBL and ADF cause the models to make very different predictions in a proposed experimental design.
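For concreteness, the kind of independence assumption ADF makes can be sketched as follows. This is a minimal illustrative example only (the two-cause setup and function names are ours, not the model described above): after each observation, the exact joint posterior is computed and then projected back onto the product of its marginals before the next trial.

```python
import numpy as np

# Minimal ADF sketch (illustrative): two binary latent causes, with an
# exact per-trial posterior followed by projection onto independent
# marginals before the next trial.

def adf_step(p_a, p_b, likelihood):
    """One assumed-density-filtering update.

    p_a, p_b   : current independent marginals P(A=1), P(B=1)
    likelihood : 2x2 array, likelihood[a, b] = P(observation | A=a, B=b)
    Returns the updated marginals after projecting the exact joint
    posterior back onto the factored (independent) family.
    """
    # Factored prior over the four joint states.
    prior = np.outer([1 - p_a, p_a], [1 - p_b, p_b])   # prior[a, b]
    # Exact joint posterior for this trial (Bayes' rule).
    joint = prior * likelihood
    joint /= joint.sum()
    # Keep only the marginals: this is the independence assumption.
    return joint[1, :].sum(), joint[:, 1].sum()

# Example: an observation that favors A=1 and is uninformative about B.
lik = np.array([[0.2, 0.2],
                [0.8, 0.8]])
p_a, p_b = 0.5, 0.5
for _ in range(3):
    p_a, p_b = adf_step(p_a, p_b, lik)
    print(round(p_a, 3), round(p_b, 3))
```

Because only the marginals survive each trial, any correlation between the latent causes induced by an observation is discarded, which is the source of the order-sensitive behavior exploited in explanations of highlighting.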
