Conceptual representation routinely draws upon modality-specific information: a bear is fast and noisy; a lemon is yellow and rounded. Understanding where and when this information converges in the brain is fundamental to a complete understanding of semantic knowledge. Neuroimaging studies have identified several ‘convergence zones’ that handle semantic information from different modalities, but further data on when information converges are vital for illuminating the role these regions play in comprehension. In this magnetoencephalography (MEG) experiment, multiple linear regression was used to investigate the spatiotemporal dynamics of feature information (e.g., color, shape, sound, motion) encoding during word reading. A spatial conjunction analysis identified the angular gyrus, superior temporal sulcus, superior parietal lobule and fusiform gyrus as convergence zones, consistent with previous findings. However, our findings suggest that while the fusiform gyrus and superior temporal sulcus fit the profile of a semantic hub, processing multi-feature information simultaneously and relatively early in word comprehension, the angular gyrus and superior parietal lobule process different features at different times throughout a two-second window.
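The core analysis described above regresses semantic feature values against neural signal at each timepoint. As a minimal, purely illustrative sketch (the array sizes, feature set, and simulated data below are assumptions, not the study's actual pipeline or parameters):

```python
import numpy as np

# Hypothetical illustration: model the MEG response across trials (words)
# at each timepoint as a linear combination of per-word semantic feature
# ratings (e.g. color, shape, sound, motion). All data here are simulated.
rng = np.random.default_rng(0)

n_words, n_features, n_times = 200, 4, 50      # illustrative sizes only
X = rng.normal(size=(n_words, n_features))     # feature ratings per word
Y = rng.normal(size=(n_words, n_times))        # simulated MEG amplitude, words x timepoints

# Add an intercept column to the design matrix.
X1 = np.column_stack([np.ones(n_words), X])

# Ordinary least-squares fit, solved independently at every timepoint:
# betas[:, t] holds the intercept plus one weight per feature at time t.
betas, *_ = np.linalg.lstsq(X1, Y, rcond=None)

print(betas.shape)  # (n_features + 1, n_times) -> (5, 50)
```

Repeating this fit at every source location and timepoint yields the spatiotemporal map of feature encoding; a conjunction across feature maps then identifies candidate convergence zones.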