Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.
J Exp Psychol Hum Percept Perform. 2011 Oct;37(5):1554-68.

Abstract

We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Besides, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
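The sensitivity and response-criterion measures mentioned in the abstract come from standard signal detection theory; the record itself gives no formulas, but a minimal sketch of the conventional calculation, assuming the usual equal-variance Gaussian model (the function name, the log-linear correction, and the example counts below are illustrative, not taken from the paper), looks like this:

```python
# Sketch of standard signal detection theory (SDT) measures, assuming the
# conventional equal-variance Gaussian model; not taken from the paper itself.
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion_c) from raw trial counts."""
    # Log-linear correction keeps hit/false-alarm rates away from 0 and 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa                # sensitivity (d')
    criterion = -(z_hit + z_fa) / 2       # response bias (criterion c)
    return d_prime, criterion

# Illustrative counts only: 40 hits, 10 misses, 12 false alarms, 38 correct rejections.
print(sdt_measures(40, 10, 12, 38))
```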

Authors and Affiliations

Yi-Chuan Chen: Department of Experimental Psychology, University of Oxford, South Parks Road, Oxford OX1 3UD, United Kingdom. yi-chuan.chen@psy.ox.ac.uk
Charles Spence: No affiliation information available.

Pub Type(s)

Journal Article
Research Support, Non-U.S. Gov't

Language

eng

PubMed ID

21688942

Citation

Chen, Yi-Chuan, and Charles Spence. "Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity." Journal of Experimental Psychology. Human Perception and Performance, vol. 37, no. 5, 2011, pp. 1554-68.
Chen YC, Spence C. Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity. J Exp Psychol Hum Percept Perform. 2011;37(5):1554-68.
Chen, Y. C., & Spence, C. (2011). Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity. Journal of Experimental Psychology. Human Perception and Performance, 37(5), 1554-68. https://doi.org/10.1037/a0024329
Chen YC, Spence C. Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity. J Exp Psychol Hum Percept Perform. 2011;37(5):1554-68. PubMed PMID: 21688942.
* Article titles in AMA citation format should be in sentence-case
TY - JOUR
T1 - Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.
AU - Chen,Yi-Chuan
AU - Spence,Charles
PY - 2011/6/22/entrez
PY - 2011/6/22/pubmed
PY - 2012/2/22/medline
SP - 1554
EP - 68
JF - Journal of experimental psychology. Human perception and performance
JO - J Exp Psychol Hum Percept Perform
VL - 37
IS - 5
N2 - We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously, Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Besides, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
SN - 1939-1277
UR - https://www.unboundmedicine.com/medline/citation/21688942/Crossmodal_semantic_priming_by_naturalistic_sounds_and_spoken_words_enhances_visual_sensitivity_
L2 - http://content.apa.org/journals/xhp/37/5/1554
DB - PRIME
DP - Unbound Medicine
ER -