How efficiently do we combine information across facial features when recognizing a face? Previous studies have suggested that face perception is not simply the result of an independent analysis of individual facial features, but also involves coding the relationships amongst those features, and that this relational coding enhances our ability to recognize a face. In our experiments, we tested whether an observer's ability to recognize a face is in fact better than what one would expect from their ability to recognize the individual facial features in isolation. We did so using a psychophysical summation-at-threshold technique that has been used extensively to measure how efficiently observers integrate information across spatial locations and spatial frequencies. Surprisingly, we found that observers integrated information across facial features less efficiently than would be predicted by their ability to recognize the individual parts.
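To make the logic of the comparison concrete, the following is a minimal illustrative sketch (not the paper's actual analysis) of how a summation-at-threshold prediction can be formed: under an optimal-summation model, sensitivities to independent parts combine in quadrature, yielding a predicted whole-face threshold against which the measured threshold is compared. All threshold values below are hypothetical.

```python
import math

# Hypothetical contrast thresholds for recognizing each facial
# feature in isolation (e.g., eyes, nose, mouth, chin).
part_thresholds = [0.20, 0.25, 0.30, 0.22]

# Sensitivity is the reciprocal of threshold.
part_sens = [1.0 / t for t in part_thresholds]

# Optimal (quadratic) summation: an ideal integrator's predicted
# sensitivity to the whole face is the root-sum-of-squares of the
# part sensitivities, so its predicted threshold is lower than any
# single part's threshold.
predicted_sens = math.sqrt(sum(s ** 2 for s in part_sens))
predicted_threshold = 1.0 / predicted_sens

# Hypothetical measured threshold for the whole face.
observed_threshold = 0.18
observed_sens = 1.0 / observed_threshold

# Integration efficiency: observed relative to predicted squared
# sensitivity. A value below 1 means observers combine feature
# information less efficiently than the optimal-summation prediction.
efficiency = (observed_sens / predicted_sens) ** 2
print(f"predicted whole-face threshold: {predicted_threshold:.3f}")
print(f"integration efficiency: {efficiency:.2f}")
```

With these hypothetical numbers the efficiency comes out below 1, which is the pattern of result the abstract reports: whole-face performance falling short of the prediction derived from part recognition.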