Cross-modal interactions during perception of audiovisual speech and nonspeech signals: an fMRI study.
J Cogn Neurosci. 2011 Jan; 23(1):221-37.

Abstract

During speech communication, visual information may interact with the auditory system at various processing stages. Most noteworthy, recent magnetoencephalography (MEG) data provided first evidence for early and preattentive phonetic/phonological encoding of the visual data stream--prior to its fusion with auditory phonological features [Hertrich, I., Mathiak, K., Lutzenberger, W., & Ackermann, H. Time course of early audiovisual interactions during speech and non-speech central-auditory processing: An MEG study. Journal of Cognitive Neuroscience, 21, 259-274, 2009]. Using functional magnetic resonance imaging, the present follow-up study aims to further elucidate the topographic distribution of visual-phonological operations and audiovisual (AV) interactions during speech perception. Ambiguous acoustic syllables--disambiguated to /pa/ or /ta/ by the visual channel (speaking face)--served as test materials, concomitant with various control conditions (nonspeech AV signals, visual-only and acoustic-only speech, and nonspeech stimuli). (i) Visual speech yielded an AV-subadditive activation of primary auditory cortex and the anterior superior temporal gyrus (STG), whereas the posterior STG responded both to speech and nonspeech motion. (ii) The inferior frontal and the fusiform gyrus of the right hemisphere showed a strong phonetic/phonological impact (differential effects of visual /pa/ vs. /ta/) upon hemodynamic activation during presentation of speaking faces. Taken together with the previous MEG data, these results point at a dual-pathway model of visual speech information processing: On the one hand, access to the auditory system via the anterior supratemporal "what" path may give rise to direct activation of "auditory objects." On the other hand, visual speech information seems to be represented in a right-hemisphere visual working memory, providing a potential basis for later interactions with auditory information such as the McGurk effect.

Authors

Department of General Neurology, University of Tübingen, Tübingen, Germany. ingo.hertrich@uni-tuebingen.de (no affiliation information available for the co-authors)

Pub Type(s)

Journal Article
Research Support, Non-U.S. Gov't

Language

eng

PubMed ID

20044895

Citation

Hertrich, Ingo, et al. "Cross-modal Interactions During Perception of Audiovisual Speech and Nonspeech Signals: An fMRI Study." Journal of Cognitive Neuroscience, vol. 23, no. 1, 2011, pp. 221-37.
Hertrich I, Dietrich S, Ackermann H. Cross-modal interactions during perception of audiovisual speech and nonspeech signals: an fMRI study. J Cogn Neurosci. 2011;23(1):221-37.
Hertrich, I., Dietrich, S., & Ackermann, H. (2011). Cross-modal interactions during perception of audiovisual speech and nonspeech signals: An fMRI study. Journal of Cognitive Neuroscience, 23(1), 221-237. https://doi.org/10.1162/jocn.2010.21421
Hertrich I, Dietrich S, Ackermann H. Cross-modal interactions during perception of audiovisual speech and nonspeech signals: an fMRI study. J Cogn Neurosci. 2011;23(1):221-37. PubMed PMID: 20044895.