What's big and fluffy but can't be seen? Selective unimodal processing of bimodal property words

Abstract

Recent work has shown that perceptual and conceptual processing share a common, modality-specific neural substrate and appear to rely on the same attentional mechanisms. However, this work has been largely limited to the conceptual processing of unimodal properties (i.e., those involving information from only one sensory modality), even though most perceptual properties are actually multimodal (i.e., involve information from more than one sensory modality). In two experiments, we investigate whether the conceptual processing of bimodal properties (e.g., fluffy, jagged) requires representation on both modalities or whether individual modalities can instead carry the representational burden. Results show that bimodal properties must be processed on both component modalities when attentional control is governed by incoming stimuli (i.e., exogenously), but that a “quick and dirty” unimodal representation can suffice when selective conscious (i.e., endogenous) attention has time to suppress the non-target modality. We discuss these findings with reference to embodied views of cognition.
