We ask observers to judge the causal function that best explains noisy test data. This method lets us examine how people combine prior biases about causal relations with new information (the noisy data). Participants are shown n data points representing a noisy sample from a hypothetical experiment, and they generate points on what they believe to be the true underlying causal function. The presented functions vary in noise level, gaps, and functional form. The method resembles function-learning studies but minimizes the roles of learning and memory. To what degree do participants exhibit a bias toward simple linear functions? We describe a hierarchical Bayesian polynomial regression model to quantify the complexity of participants' judgments. The results show the expected bias toward simplicity, along with notable individual differences.
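The core idea behind quantifying complexity can be illustrated with a simplified, non-hierarchical sketch: Bayesian polynomial regression in which each candidate degree is scored by its marginal likelihood, whose built-in Occam penalty favors lower-degree (simpler) fits unless the data demand more structure. The prior precision `alpha`, noise precision `beta`, and the simulated data below are illustrative assumptions, not the model or data from the study.

```python
import numpy as np

def log_evidence(x, y, degree, alpha=1.0, beta=25.0):
    """Log marginal likelihood of a Bayesian polynomial regression with a
    zero-mean Gaussian prior N(0, 1/alpha) on each coefficient and Gaussian
    observation noise with precision beta (standard closed form for
    Bayesian linear regression with a polynomial design matrix)."""
    Phi = np.vander(x, degree + 1, increasing=True)   # columns: x^0 .. x^degree
    n, d = Phi.shape
    A = alpha * np.eye(d) + beta * Phi.T @ Phi        # posterior precision matrix
    m = beta * np.linalg.solve(A, Phi.T @ y)          # posterior mean of coefficients
    fit = beta / 2 * np.sum((y - Phi @ m) ** 2) + alpha / 2 * m @ m
    _, logdetA = np.linalg.slogdet(A)                 # Occam penalty grows with degree
    return (d / 2 * np.log(alpha) + n / 2 * np.log(beta)
            - fit - logdetA / 2 - n / 2 * np.log(2 * np.pi))

# Simulated "participant" data from a truly linear function plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 30)
y = 0.5 + 1.2 * x + rng.normal(0.0, 0.2, size=x.size)

# Score polynomial degrees 0..5; the evidence-maximizing degree is a
# simple per-dataset complexity measure.
evidences = [log_evidence(x, y, deg) for deg in range(6)]
best_degree = int(np.argmax(evidences))
```

In the paper's hierarchical version, per-participant complexity parameters would additionally be tied together by group-level priors; the sketch above only shows the evidence-based preference for simpler polynomials that such a model builds on.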