Attenuation theory is a model of selective attention proposed by Anne Treisman, and can be seen as a revision of Donald Broadbent's Filter model. Treisman proposed attenuation theory as a means to explain how unattended stimuli sometimes came to be processed more thoroughly than Broadbent's filter model could account for.[1] As a result, attenuation theory added layers of sophistication to Broadbent's original idea of how selective attention might operate: claiming that instead of a filter which barred unattended inputs from ever entering awareness, selection worked by a process of attenuation.[2] Thus, the attenuation of unattended stimuli would make it difficult, but not impossible, to extract meaningful content from irrelevant inputs, so long as the stimuli still possessed sufficient "strength" after attenuation to make it through a hierarchical analysis process.[2]

Brief overview and previous research

Selective attention theories aim to explain why and how individuals tend to process only certain parts of the world surrounding them while ignoring others. Given that sensory information constantly besieges us through the five sensory modalities, it was of interest not only to pinpoint where the selection of attention took place, but also to explain how we prioritize and process sensory inputs.[3] Early theories of attention, such as those proposed by Broadbent and Treisman, took a bottleneck perspective.[2][4] That is, they inferred that it was impossible to attend to all the sensory information available at any one time due to limited processing capacity. As a result of this limited capacity to process sensory information, there was believed to be a filter that would prevent overload by reducing the amount of information passed on for processing.[5]

Methodology

Early research came from an era primarily focused upon audition and upon explaining phenomena such as the cocktail party effect.[6] From this stemmed an interest in how we can pick and choose to attend to certain sounds in our surroundings and, at a deeper level, in how the processing of attended speech signals differs from that of signals not attended to.[7] Auditory attention is often described as the selection of a channel, message, ear, stimulus, or, in the more general phrasing used by Treisman, the "selection between inputs".[8] As audition became the preferred modality for examining selective attention, dichotic listening and shadowing became the preferred testing procedures.[6]

Dichotic Listening

Dichotic listening is an experimental procedure used to demonstrate the selective filtering of auditory inputs, and was primarily utilized by Broadbent.[4] In a dichotic listening task, participants would be asked to wear a set of headphones and attend to information presented to both ears (two channels), or a single ear (one channel) while disregarding anything presented in the opposite channel. Upon completion of a listening task, participants would then be asked to recall any details noticed about the unattended channel.[9]

Shadowing

Shadowing can be seen as an elaboration upon dichotic listening. In shadowing, participants go through largely the same process, only this time they are tasked with repeating aloud information heard in the attended ear as it is being presented. This recitation of information is carried out so that the experimenters can verify participants are attending to the correct channel, and the number of words perceived (recited) correctly can be scored for later use as a dependent variable.[2] Due to its live rehearsal characteristic, shadowing is a more versatile testing procedure because manipulations to channels and their immediate results can be witnessed in real time.[10] It is also favored for being more accurate since shadowing is less dependent upon participants' ability to recall words heard correctly.[10]

Broadbent's Filter Model as a stepping stone

[File:Broadbent Filter Model.jpg: Information processing model of Broadbent's filter]

Donald Broadbent's filter model is the earliest bottleneck theory of attention and served as a foundation upon which Anne Treisman would later build her model of attenuation.[9] Broadbent proposed the idea that the mind can only work with so much sensory input at any given time, and as a result, there must be a filter that allows us to selectively attend to some things while blocking others out. It was posited that this filter preceded pattern recognition of stimuli, and that attention dictated what information reached the pattern recognition stage by controlling whether or not inputs were filtered out.[4]

The first stage of the filtration process extracts physical properties of all stimuli in parallel.[9] The second stage was claimed to be of limited capacity, and so this is where the selective filter was believed to reside in order to protect against a sensory processing overload.[9] Based upon the physical properties extracted at the initial stage, the filter would allow only those stimuli possessing certain criterion features (e.g., pitch, loudness, location) to pass through. According to Broadbent, any information not being attended to would be filtered out, and would be processed only to the extent of the physical qualities used by the filter.[4] Since selection was sensitive to physical properties alone, this was thought to be the reason why people possessed so little knowledge regarding the contents of an unattended message.[9] All higher level processing, such as the extraction of meaning, happens post-filter. Thus, information on the unattended channel should not be comprehended. As a consequence, events such as hearing one's own name when not paying attention should be impossible, since that information should be filtered out before its meaning can be processed.
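To make the all-or-none character of this filter concrete, the sketch below models selection as a hard pass-or-block decision based on a single physical property, applied before any meaning is extracted. The channel contents, property names, and criterion values are invented for illustration and are not taken from Broadbent's experiments.

```python
# A minimal sketch of Broadbent's all-or-none filter (illustrative data only).
channels = {
    "left_ear":  {"pitch": "low",  "words": ["you", "may", "now", "stop"]},
    "right_ear": {"pitch": "high", "words": ["six", "two", "nine"]},
}

def broadbent_filter(channels, criterion):
    """Pass only channels whose physical properties match the criterion;
    every other channel is blocked before semantic analysis."""
    return {name: ch for name, ch in channels.items()
            if all(ch.get(key) == value for key, value in criterion.items())}

# Only the selected channel ever reaches the pattern-recognition stage.
selected = broadbent_filter(channels, {"pitch": "high"})
for name, ch in selected.items():
    print(name, "reaches pattern recognition:", ch["words"])
```

In this caricature the unattended words are simply discarded, which is exactly the point the criticisms in the next section push against.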

Criticisms leading to a theory of attenuation

As noted above, the filter model of attention runs into difficulty when attempting to explain how we come to extract meaning from an event of which we should otherwise be unaware. For this reason, and as illustrated by the examples below, Treisman proposed attenuation theory as a means to explain how unattended stimuli are sometimes processed more thoroughly than Broadbent's filter model can account for.[1]

  • For two messages identical in content, it has been shown that by varying the time interval between the onset of the irrelevant message in relation to the attended message, participants may notice the duplication.[11]
  • When participants were presented with the message "you may now stop" in the unattended ear, a significant number did so.[12]
  • In a classic demonstration of the cocktail party phenomenon, participants who had their own name presented to them via the unattended ear often remarked on having heard it.[12]
  • Participants with training or practice can more effectively perceive content from the unattended channel while attending to another.[12][13]
  • Semantic processing of unattended stimuli has been demonstrated by altering the contextual relevance of words presented to the unattended ear. Participants heard words in the unattended ear more often when those words were highly relevant to the attended message.[14]

Attenuation Model of selective attention

[File:Treisman Attenuation Model.jpg: Information processing model of Treisman's attenuation theory]

How Attenuation occurs

Treisman's attenuation model of selective attention retains both the idea of an early selection process and the mechanism by which physical cues are used as the primary point of discrimination.[3] However, unlike in Broadbent's model, the filter now attenuates unattended information instead of filtering it out completely.[1]

Attenuation can be thought of by analogy to a volume control. Imagine you are in a noisy environment trying to have a phone conversation while a radio and television are on. Attenuation works by keeping what you would like to attend to at full volume (the phone), while turning down the volume on both the television and radio (the unattended channels) to a level at which they are not off, but just barely perceptible. The information from the unattended channels is therefore not totally lost; it is simply much harder to perceive.[15]
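As a rough sketch of this volume-control analogy, the snippet below scales every unattended channel by a fixed gain while leaving the attended channel untouched. The channel names and the 0.2 gain are illustrative assumptions rather than parameters of Treisman's model.

```python
# Attenuation as a volume control: unattended channels are turned down,
# not switched off. All values here are invented for illustration.
signals = {"phone": 1.0, "radio": 1.0, "television": 1.0}

def attenuate(signals, attended, gain=0.2):
    """Return each channel's signal strength after attenuation."""
    return {name: (strength if name == attended else strength * gain)
            for name, strength in signals.items()}

print(attenuate(signals, attended="phone"))
# {'phone': 1.0, 'radio': 0.2, 'television': 0.2}
```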

Treisman further elaborated upon this model by introducing the concept of a threshold to explain how some words came to be heard in the unattended channel with greater frequency than others. Every word was believed to contain its own threshold that dictated the likelihood that it would be perceived after attenuation.[16]

After the initial phase of attenuation, information is then passed on to a hierarchy of analyzers that perform higher level processes to extract more meaningful content (see the "Hierarchy of Analyzers" section below).[1] The crucial aspect of attenuation theory is that attended inputs will always undergo full processing, whereas irrelevant stimuli often lack a sufficiently low threshold to be fully analyzed, so that only their physical qualities, rather than their semantics, are registered.[3] Additionally, attenuation and the subsequent processing of stimuli are dictated by the current demands on the processing system: it is often the case that not enough resources are available to thoroughly process unattended inputs.[16]

Recognition Threshold

The operation of the recognition threshold is simple: for every possible input, an individual has a certain threshold or "amount of activation required" in order to perceive it. The lower this threshold, the more easily and likely an input is to be perceived, even after undergoing attenuation.[17]
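The threshold idea can be sketched in the same spirit: an input is recognized only if its activation after attenuation still reaches that input's threshold. The specific words and numeric values below are invented for illustration; the theory itself does not assign numbers.

```python
# Recognition thresholds: low-threshold items survive attenuation.
# Words and numbers are illustrative assumptions.
thresholds = {"table": 0.8, "fire": 0.1, "own_name": 0.05}

def perceived(word, activation, thresholds):
    """A word is recognized when its post-attenuation activation
    meets or exceeds its threshold."""
    return activation >= thresholds[word]

attenuated_activation = 0.2   # e.g., strength left on an unattended channel
for word in thresholds:
    print(word, perceived(word, attenuated_activation, thresholds))
# "fire" and "own_name" break through; "table" does not.
```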

Threshold Affectors

Context and priming

Context plays a key role in reducing the threshold required to recognize stimuli by creating an expectancy for related information.[9] Context acts through a mechanism of priming, wherein related information becomes momentarily more pertinent and accessible, lowering the threshold for recognition in the process.[3] An example of this can be seen in the statement "the recess bell rang", where the word rang and its synonyms would experience a lowered threshold due to the priming facilitated by the words that precede it.
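A minimal sketch of this priming mechanism, assuming a hand-made mapping from context words to related words and an arbitrary reduction factor, might look as follows.

```python
# Priming as a temporary lowering of thresholds for context-related words.
# The word lists, baseline thresholds, and 0.5 factor are assumptions.
baseline = {"rang": 0.6, "chimed": 0.6, "stone": 0.6}
related_words = {"bell": {"rang", "chimed"}}

def primed_thresholds(baseline, context_word, related, factor=0.5):
    """Lower the thresholds of words related to the current context."""
    primed = related.get(context_word, set())
    return {word: (t * factor if word in primed else t)
            for word, t in baseline.items()}

print(primed_thresholds(baseline, "bell", related_words))
# 'rang' and 'chimed' now need less activation than 'stone' to be perceived.
```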

Subjective importance

Words that possess subjective importance (e.g., help, fire) will have a lower threshold than those that do not.[2] Words of great individual importance, such as your own name, will have a permanently low threshold and will be able to come into awareness under almost all circumstances.[18] On the other hand, some words are more variable in their individual meaning, and rely upon their frequency of use, context, and continuity with the attended message in order to be perceived.[18]

Degree of attenuation

The degree of attenuation can change in relation to the content of the underlying message, with larger amounts of attenuation applied to incoherent messages that possess little benefit to the person hearing them.[1] Incoherent messages receive the greatest amounts of attenuation because any interference they might exert upon the attended message would be more detrimental than that of comprehensible or complementary information.[1] The level of attenuation can have a profound impact on whether an input will be perceived or not, and can vary dynamically depending upon attentional demands.[15]

Hierarchy of Analyzers

The hierarchical system of analysis is one of maximal economy: while facilitating the potential for important, unexpected, or unattended stimuli to be perceived, it ensures that sufficiently attenuated messages do not get much further than the earliest stages of analysis, preventing an overburdening of sensory processing capacity.[2] If attentional demands (and the subsequent processing demands) are low, full hierarchical processing takes place. If demands are high, attenuation becomes more aggressive, allowing only important or relevant information from the unattended message to be processed.[1] The hierarchical analysis process is characterized by its serial nature, yielding a unique result for each word or piece of data analyzed.[18] Attenuated information passes through all the analyzers only if its threshold has been lowered in its favor; if not, the information passes only as far as its threshold allows.[18]

The nervous system sequentially analyzes an input, starting with general physical features such as pitch and loudness, followed by the identification of words and meaning (e.g., syllables, words, grammar and semantics).[8] The hierarchical process also serves an essential purpose if inputs are identical in terms of voice, amplitude, and spatial cues. Should all of these physical characteristics be identical between messages, then attenuation cannot effectively take place at an early level based on these properties. Instead, attenuation will occur during the identification of words and meaning, and this is where the capacity to handle information can be scarce.[8]
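One way to sketch this serial hierarchy is as an ordered list of stages, each requiring a certain amount of remaining signal strength, with processing stopping at the first stage the input cannot support. The stage names follow the text, but the numeric costs are invented for illustration.

```python
# A hierarchy of analyzers as an ordered pipeline; numbers are illustrative.
STAGES = [("physical features", 0.1),
          ("syllables", 0.3),
          ("words", 0.5),
          ("grammar/semantics", 0.7)]

def analyze(strength):
    """Collect the stages an input clears, in order, stopping at the first
    stage its remaining strength cannot support."""
    cleared = []
    for name, required in STAGES:
        if strength < required:
            break
        cleared.append(name)
    return cleared

print("attended (1.0):  ", analyze(1.0))   # clears every stage
print("attenuated (0.2):", analyze(0.2))   # only physical features survive
```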

Evidence for attenuation theory

Following messages to the unattended ear

During shadowing experiments, Treisman would present a unique stream of prose stimuli to each ear. At some point during shadowing, the stimuli would swap over to the opposite side, so that the formerly shadowed message was now presented to the unattended ear. Participants would often "follow" the message over to the unattended ear before realizing their mistake,[14] especially if the stimuli had a high degree of continuity.[19] This "following of the message" illustrates that some degree of information is still being extracted from the unattended channel, and contradicts Broadbent's filter model, which would predict that participants remain completely oblivious to the change in the unattended channel.[14]

Manipulating the onset of messages

In a series of experiments carried out by Treisman (1964), two messages identical in content would be played, and the amount of time between the onset of the irrelevant message in relation to the shadowed message would be varied. Participants were never informed of the duplication, and the time lag between messages would be altered until participants remarked upon the similarity. If the irrelevant message was allowed to lead, it was found that the time gap could not exceed 1.4 seconds.[1] This was believed to be a result of the irrelevant message undergoing attenuation and receiving no processing beyond the physical level. This lack of deep processing necessitates that the irrelevant message be held in the sensory store before comparison with the shadowed message, making it vulnerable to decay.[1] In contrast, when the shadowed message led, the irrelevant message could lag behind it by as much as five seconds and participants could still perceive the similarity. This shows that the shadowed message does not decay as quickly, and coincides with what attenuation theory would predict: the shadowed message receives no attenuation, undergoes full processing, and is then passed on to working memory, where it can be held for a comparatively longer duration than the unattended message in the sensory store.[1]

Variations upon this method involved using identical messages spoken in different voices (e.g., gender), or manipulating whether the message was composed of non-words to examine the effect of not being able to extract meaning. In all cases, support was found for a theory of attenuation.[1][6]

Bilingual Shadowing

Bilingual students were found to recognize that a message presented to the unattended channel was the same as the one being attended to, even when presented in a different language.[1] This was achieved by having participants shadow a message presented in English, while playing the same message in French to the unattended ear. Once again, this shows extraction of meaningful information from the speech signal above and beyond physical characteristics alone.[6]

Electrical shock and unattended words

Corteen and Dunn (1974) paired electrical shock with target words. It was found that if these words were later presented in the absence of shock, participants would respond automatically with a galvanic skin response (GSR) even when the words were played to the unattended ear. Furthermore, GSRs were found to generalize to synonyms of unattended target words, implying that word processing was taking place at a deeper level than Broadbent's model would predict.[20]

Event-related potentials of irrelevant stimuli

Van Voorhis and Hillyard (1977) used EEG to observe event-related potentials (ERPs) to visual stimuli. Participants were asked to attend to or disregard specific stimuli presented. The results demonstrated that, when participants attended to visual stimuli, the voltage fluctuation at occipital sites was greater for attended stimuli than for unattended stimuli. These voltage modulations were observed from approximately 100 ms after stimulus onset, consistent with what would be predicted by the attenuation of irrelevant inputs.[21]

Effects of attentional demand on brain activity

An fMRI study examined whether meaning is implicitly extracted from unattended words, or whether the extraction of meaning can be avoided by simultaneously presenting distracting stimuli. It was found that when competing stimuli create sufficient attentional demand, no brain activity is observed in response to the unattended words, even when they are directly fixated upon.[22] These results are in keeping with what would be predicted by an attenuation style of selection and run contrary to classical late selection theory.[23]

Competing theories

In 1963, Deutsch and Deutsch proposed a late selection model of how selective attention operates. They proposed that all stimuli are processed in full, the crucial difference being a filter placed later in the information processing routine, just before entry into working memory. The late selection process was held to operate on the semantic characteristics of a message, barring inputs from memory and subsequent awareness if they did not possess the desired content.[19] According to this model, the diminished awareness of unattended stimuli stemmed from their being denied entry into working memory and from the controlled generation of responses to them.[9] The Deutsch and Deutsch model was later revised by Norman in 1968, who added that the strength of an input was also an important factor in its selection.[24]
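For contrast with the attenuation sketches above, a late-selection scheme can be caricatured as analyzing every message in full and applying a relevance test only at the door to working memory. The example messages and the relevance test below are illustrative assumptions, not materials from the original studies.

```python
# A caricature of late selection: full analysis first, selection afterwards.
messages = {"attended": "the recess bell rang",
            "unattended": "you may now stop"}

def fully_analyze(text):
    """Stand-in for complete perceptual and semantic analysis of a message."""
    return {"text": text, "meaning_extracted": True}

def late_selection(messages, is_relevant):
    """Analyze every message in full, then admit only those whose semantic
    content passes the relevance test into working memory."""
    analyzed = {name: fully_analyze(text) for name, text in messages.items()}
    return {name: result for name, result in analyzed.items()
            if is_relevant(result)}

# Selection happens only after full analysis, on semantic grounds.
print(late_selection(messages, lambda result: "bell" in result["text"]))
```

The criticism that follows turns on exactly this property: everything, relevant or not, has already been fully processed by the time selection occurs.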

A criticism of both the original Deutsch and Deutsch model and the revised Deutsch-Norman selection model is that all stimuli, including those deemed irrelevant, are processed fully.[10] When contrasted with Treisman's attenuation model, the late selection approach appears wasteful, with its thorough processing of all information before selection for admittance into working memory.[18]

References

  1. DOI:10.1016/S0022-5371(64)80015-3
  2. DOI:10.1037/h0026890
  3. DOI:10.2307/1420765
  4. DOI:10.1037/10037-000
  5. Friedenberg, Jay (2011). Cognitive Science: An Introduction to the Study of Mind, 98–101. Sage Publications.
  6. Kahneman, Daniel (1973). Attention and Effort, 122–123. Prentice Hall.
  7. DOI:10.1037/h0027366
  8. DOI:10.2307/1420127
  9. DOI:10.1348/000712601162103
  10. Karwowski, Waldemar (2006). International Encyclopedia of Ergonomics and Human Factors, Second Edition, 439. CRC Press.
  11. DOI:10.1121/1.1907229
  12. DOI:10.1080/17470215908416289
  13. DOI:10.1080/14640747408400426
  14. DOI:10.1080/17470216008416732
  15. Galotti, Kathleen (2009). Cognitive Psychology: In and Out of the Laboratory, 105–107. Nelson College Indigenous.
  16. DOI:10.1037/h0027242
  17. Cowan, Nelson (1997). Attention and Memory: An Integrated Framework, 137–139. Oxford University Press.
  18. Treisman, Anne (1964). Selective attention in man. British Medical Bulletin 20 (1): 12–16.
  19. DOI:10.1037/h0039515
  20. Corteen, R. S., Dunn, D. (1974). Shock-associated words in a nonattended message: A test for momentary awareness. Journal of Experimental Psychology 102: 1143–1144.
  21. Van Voorhis, S. T., Hillyard, S. A. (1977). Visual evoked potentials and selective attention to points in space. Perception & Psychophysics 22: 54–62.
  22. Rees, G., Russell, C., Frith, C., & Driver, J. (1999). Inattentional blindness versus inattentional amnesia for fixated but ignored words. Science 286 (5449): 2504–2507.
  23. Lavie, N. (2000). Selective attention and cognitive control: Dissociating attentional functions through different types of load, 175–197. Cambridge, MA: MIT Press.
  24. DOI:10.1037/h0026699