Models of Human Category Learning: Do They Generalize?

Abstract

Generalization to new examples is an essential aspect of categorization. However, recent category learning research has not focused on how people generalize their category knowledge. Taking generalization to be a critical basis for evaluating formal models of category learning, we employed a ‘minimal case’ approach to begin a systematic investigation of generalization. Human participants received supervised training on a two-way artificial classification task based on two stimulus dimensions, each of which perfectly predicted category membership. Learners were then asked to classify new examples sampled from the stimulus space. Most participants based their judgments on one dimension or the other. Varying the relative salience of the dimensions influenced generalization outcomes, but varying category size (2, 4, or 8 items) did not. We fit two theoretically distinct similarity-based models (ALCOVE and DIVA) to aggregate learning data and tested them on the generalization set. Both models could explain important aspects of human performance, but DIVA provided the superior overall account.
