The temporal dynamics of spoken word recognition remain a matter of debate. While some studies suggest serial processing of sublexical and lexico-semantic information (e.g., Kocagoncu et al., 2017), others have reported parallel processing from the earliest stages (e.g., Lewis & Poeppel, 2014). The current study employed multiple linear regression to predict MEG evoked responses in 20 native Italian speakers performing a semantic judgment on 438 spoken Italian words. MEG responses were modeled around the uniqueness point (UP) using four predictors: Lexical Neighborhood size (LN), word Frequency, Vision (a semantic regressor derived from ratings of several visual features in the database of Binder et al., 2016), and participants’ responses as a covariate of no interest. The sensor-level time course of event-related regressor coefficients (ERRCs) showed LN-related activity from 350 ms before to 240 ms after the UP, probably reflecting lexical competition between similar wordforms. Frequency effects peaked ~200 ms after the UP. The Vision-related semantic regressor peaked ~400 ms after the UP and remained significant thereafter. Source-level ERRC maps localized LN effects to the bilateral supramarginal gyri and the superior temporal sulcus (STS). Frequency effects mapped mainly to the left STS and the inferior frontal gyrus (IFG). Early stages (~400 ms after the UP) of Vision-related activity involved the bilateral IFG, the left STS, and the left ventral occipitotemporal cortex. These results attest to distinct processing stages for the information carried by spoken words, supporting a more serial account of spoken word recognition.
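The per-timepoint regression underlying the ERRC analysis can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the function name, array shapes, and the use of ordinary least squares are assumptions for the sake of the example.

```python
import numpy as np

def errc_time_courses(meg, design):
    """Per-timepoint multiple linear regression yielding ERRC time courses.

    meg:    (n_trials, n_times) single-sensor evoked responses, time-locked
            to the uniqueness point (UP)
    design: (n_trials, n_predictors) trial-wise predictor matrix, e.g.
            columns for LN, Frequency, Vision, and the response covariate
    Returns a (n_predictors + 1, n_times) array of regression coefficients;
    the first row is the intercept, each remaining row is one regressor's
    coefficient as a function of time (its ERRC).
    """
    n_trials, _ = meg.shape
    X = np.column_stack([np.ones(n_trials), design])  # add intercept column
    # One least-squares solve gives coefficients for all timepoints at once,
    # since each time sample is an independent column of the outcome matrix.
    beta, *_ = np.linalg.lstsq(X, meg, rcond=None)
    return beta
```

In practice such coefficients would be estimated per sensor (or per source vertex) and then tested for significance across participants; the sketch only shows the core per-timepoint fit.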
Kocagoncu, E., Clarke, A., Devereux, B. J., & Tyler, L. K. (2017). Decoding the cortical dynamics of sound-meaning mapping. Journal of Neuroscience, 37(5), 1312-1319.
Lewis, G., & Poeppel, D. (2014). The role of visual representations during the lexical access of spoken words. Brain and Language, 134, 1-10.