Individual Differences in Explaining Noisy Data

Abstract

In science, we design our inference approaches to trade off fit to observed data (models that fit well are good) against complexity (models or explanations that fit or explain everything are bad). Here, we examine how observers balance fit and complexity by asking them to estimate causal models for noisy data. Specifically, participants are shown a series of scatterplots that vary in the number of data points, the noise added to the true function, and the complexity of the true function. For each set of noisy data points, participants estimate the function that best captures their guess about the causal relationship between input and output. A generative psychological model combining Bayesian model selection and Gaussian process regression is used to examine individual differences in biases toward simple explanations. Our results indicate that some participants prefer simple polynomial, rule-based explanations, while others prefer distance-based, similarity explanations.
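As a rough illustration of the fit-complexity trade-off the abstract describes, the following sketch (not the authors' actual model; kernel choices, hyperparameters, and the toy data are assumptions) compares how a rule-like polynomial kernel and a similarity-based RBF kernel account for noisy data using the Gaussian process log marginal likelihood, which rewards fit while penalizing unnecessary flexibility.

```python
# Minimal sketch: Bayesian model comparison between a "rule-based" polynomial
# kernel and a "similarity-based" RBF kernel via GP marginal likelihood.
# All choices below (data, kernels, noise level) are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 30).reshape(-1, 1)
y = 0.5 * x.ravel() ** 2 + rng.normal(scale=1.0, size=30)  # noisy quadratic

kernels = {
    "polynomial (rule-based)": DotProduct() ** 2 + WhiteKernel(),
    "RBF (similarity-based)": RBF() + WhiteKernel(),
}
for name, kernel in kernels.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(x, y)
    # Higher log marginal likelihood = better balance of fit and complexity.
    print(f"{name}: log marginal likelihood = "
          f"{gp.log_marginal_likelihood_value_:.1f}")
```

In this toy setup the marginal likelihood acts as the model-selection criterion: a kernel flexible enough to fit the data but no more flexible than necessary scores highest, mirroring the fit-versus-complexity balance participants are hypothesized to strike.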
