Psychon Bull Rev [journal]
- Knowledge and luck. [JOURNAL ARTICLE]
- Psychon Bull Rev 2014 Jul 9.
Nearly all success is due to some mix of ability and luck. But some successes we attribute to the agent's ability, whereas others we attribute to luck. To better understand the criteria distinguishing credit from luck, we conducted a series of four studies on knowledge attributions. Knowledge is an achievement that involves reaching the truth. But many factors affecting the truth are beyond our control, and reaching the truth is often partly due to luck. Which sorts of luck are compatible with knowledge? We found that knowledge attributions are highly sensitive to lucky events that change the explanation for why a belief is true. By contrast, knowledge attributions are surprisingly insensitive to lucky events that threaten, but ultimately fail to change the explanation for why a belief is true. These results shed light on our concept of knowledge, help explain apparent inconsistencies in prior work on knowledge attributions, and constitute progress toward a general understanding of the relation between success and luck.
- The processing of speech, gesture, and action during language comprehension. [JOURNAL ARTICLE]
- Psychon Bull Rev 2014 Jul 8.
Hand gestures and speech form a single integrated system of meaning during language comprehension, but is gesture processed with speech in a unique fashion? We had subjects watch multimodal videos that presented auditory (words) and visual (gestures and actions on objects) information. Half of the subjects related the audio information to a written prime presented before the video, and the other half related the visual information to the written prime. For half of the multimodal video stimuli, the audio and visual information contents were congruent, and for the other half, they were incongruent. For all subjects, stimuli in which the gestures and actions were incongruent with the speech produced more errors and longer response times than did stimuli that were congruent, but this effect was less prominent for speech-action stimuli than for speech-gesture stimuli. However, subjects focusing on visual targets were more accurate when processing actions than gestures. These results suggest that although actions may be easier to process than gestures, gestures may be more tightly tied to the processing of accompanying speech.
- Paradoxes of optimal decision making: a response to Moran (2014). [JOURNAL ARTICLE]
- Psychon Bull Rev 2014 Jul 8.
- The benefits of interleaved and blocked study: Different tasks benefit from different schedules of study. [JOURNAL ARTICLE]
- Psychon Bull Rev 2014 Jul 2.
Research on how information should be studied during inductive category learning has identified both interleaving of categories and blocking by category as beneficial for learning. Previous work suggests that this mixed evidence can be reconciled by taking into account within- and between-category similarity relations. In this article, we present a new moderating factor. Across two experiments, one group of participants studied categories actively (by studying the objects without correct category assignment and actively figuring out what the category was), either interleaved or blocked. Another group studied the same categories passively (objects and correct category assignment were simultaneously provided). Results from a subsequent generalization task show that whether interleaved or blocked study results in better learning depends on whether study is active or passive. One account of these results is that different presentation sequences and tasks promote different patterns of attention to stimulus components. Passive learning and blocking promote attending to commonalities within categories, while active learning and interleaving promote attending to differences between categories.
- Letter position coding across modalities: Braille and sighted reading of sentences with jumbled words. [JOURNAL ARTICLE]
- Psychon Bull Rev 2014 Jul 1.
This article explores how letter position coding is attained during braille reading and its implications for models of word recognition. When text is presented visually, the reading process easily adjusts to the jumbling of some letters (jugde-judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other focuses on the activation of an abstract level of representation (i.e., bigrams) that is shared by all orthographic codes. Thus, these explanations make differential predictions about reading in a tactile modality. In the present study, congenitally blind readers read sentences presented on a braille display that tracked the finger position. The sentences either were intact or involved letter transpositions. A parallel experiment was conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings with sighted readers, in which there is a cost of transpositions in the external (initial and final) letters, the reading cost in braille readers occurs serially, with a large cost for initial letter transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to the characteristics of the visual stream.
- Corrigendum to "Why does picture naming take longer than word naming? The contribution of articulatory processes" [JOURNAL ARTICLE]
- Psychon Bull Rev 2014 Jul 1.
In a previous article (Riès, Legou, Burle, Alario, & Malfait, 2012), we reported that articulatory processes contribute to the well-established finding that response latencies are longer for picture naming than for word reading. We based this conclusion on the observation that picture naming, as compared with word reading, lengthened not only the interval between stimulus onset and the initiation of lip muscle activation (premotor time), but also the interval between lip muscle activation and vocal response onset (motor time). However, on the basis of our subsequent work in this area, we believe that our original definition of premotor time (and, consequently, of motor time) was suboptimal. On a sizable number of trials, it led to the detection of lip muscle activation (as inferred from surface EMG) that was apparently unrelated to the articulation of the vocal response. We therefore believe it is preferable to operationalize premotor time as the interval between stimulus onset and the muscle activation that occurred closest in time to vocal response onset. After reestimating premotor times according to this new definition, we no longer found an effect of our task contrast on the motor time interval. The present article explains the caveats regarding our previous analysis.
- Skipped words and fixated words are processed differently during reading. [JOURNAL ARTICLE]
- Psychon Bull Rev 2014 Jun 28.
The purpose of this study was to investigate whether words are processed differently when they are fixated during silent reading than when they are skipped. According to a serial processing model of eye movement control (e.g., E-Z Reader), skipped words are fully processed (Reichle, Rayner, & Pollatsek, Behavioral and Brain Sciences, 26(4), 445-476, 2003), whereas in a parallel processing model (e.g., SWIFT), skipped words need not be fully processed (Engbert, Nuthmann, Richter, & Kliegl, Psychological Review, 112(4), 777-813, 2005). Participants read 34 sentences with embedded target words while their eye movements were recorded. All target words were three-letter, low-frequency, unpredictable nouns. After the reading session, participants completed a repetition priming lexical decision task in which the target words from the reading session served as the repetition prime targets, with the presentation of those same words during the reading task acting as the prime. When participants had skipped a word during the reading session, their reaction times on the lexical decision task were significantly longer (M = 656.42 ms) than when they had fixated the word (M = 614.43 ms). This result provides evidence that skipped words are sometimes not processed to the same degree as fixated words during reading.
- An ERP investigation of dichotic repetition priming with temporally overlapping stimuli. [JOURNAL ARTICLE]
- Psychon Bull Rev 2014 Jun 28.
In the present study, we used event-related potentials (ERPs) to examine the effects of prime-target repetition using a dichotic priming paradigm. Participants monitored a stream of target words in the right, attended ear for occasional animal names, and ERPs were recorded to nonanimal words that were either unrelated to or a repetition of prime words presented to the left ear. The prime words were spoken in a different voice and had a lower intensity than did the target words, and the prime word onset occurred 50 ms before target word onset. Repetition-priming effects were observed in the ERPs starting around 150 ms post-target-onset and continued to influence processing for the duration of the target stimuli. These priming effects provide further evidence in favor of parallel processing of overlapping dichotic stimuli, at least up to the level of some form of sublexical phonological representation, a likely locus for the integration of the two sources of information.
- Verbal labeling, gradual decay, and sudden death in visual short-term memory. [JOURNAL ARTICLE]
- Psychon Bull Rev 2014 Jun 26.
Zhang and Luck (Psychological Science, 20, 423-428, 2009) found that perceptual memories are lost over time via sudden death rather than gradual decay. However, they acknowledged that participants may have instead lost memory for the locations of objects. We required observers to recall only a single object. Although the paradigm eliminated the need to maintain object-location bindings, the possibility that observers would use verbal labels increased. To measure the precision of verbal labeling, we included explicit verbal-labeling and label-matching trials. We applied a model that measured the contributions of sudden death, gradual decay, and verbal labeling to recall. Our model-based evidence pointed to sudden death as the primary vehicle by which perceptual memories were lost. Crucially, however, the sudden-death hypothesis was favored only when the verbal-labeling component was included as part of the modeling. The results underscore the importance of taking into account the potential role of verbal-labeling processes in investigations of perceptual memory.
- Performance on perceptual word identification is mediated by discrete states. [JOURNAL ARTICLE]
- Psychon Bull Rev 2014 Jun 26.
We contrast predictions from discrete-state models of all-or-none information loss with signal-detection models of graded strength for the identification of briefly flashed English words. Previous assessments have focused on whether ROC curves are straight, which is a test of a discrete-state model in which detection leads to the highest confidence response with certainty. We, along with many others, argue that this certainty assumption is too constraining and, consequently, that the straight-line ROC test is too stringent. Instead, we assess a core property of discrete-state models, conditional independence, whereby the pattern of responses depends only on which state is entered. The conditional independence property implies that confidence ratings are a mixture of detect-state and guess-state responses, and that stimulus strength factors (the duration of the flashed word, in this report) affect only the probability of entering a state, not the responses conditional on a state. To assess this mixture property, 50 participants saw words presented briefly on a computer screen at three variable flash durations, followed by either a two-alternative confidence ratings task or a yes-no confidence ratings task. Comparable discrete-state and signal-detection models were fit to the data for each participant and task. The discrete-state models outperformed the signal-detection models for 90% of participants in the two-alternative task and for 68% of participants in the yes-no task. We conclude that discrete-state models are viable for predicting performance across stimulus conditions in a perceptual word identification task.