Music-related memory

Musical memory refers to the ability to remember music-related information, such as melodic content and other progressions of tones or pitches. The differences found between linguistic memory and musical memory have led researchers to theorize that musical memory is encoded differently from language and may constitute an independent part of the phonological loop.

Neurological bases
Consistent with hemispheric lateralization, there is evidence that the left and right hemispheres of the brain are responsible for different components of musical memory. By studying the learning curves of patients with damage to either the left or right medial temporal lobe, Wilson & Saling (2008) found hemispheric differences in the contributions of the left and right medial temporal lobes to melodic memory. Ayotte, Peretz, Rousseau, Bard & Bojanowski (2000) found that patients whose left middle cerebral artery had been cut in response to an aneurysm suffered greater impairments on tasks of musical long-term memory than patients whose right middle cerebral artery had been cut. They concluded that the left hemisphere is mainly important for musical representation in long-term memory, whereas the right is needed primarily to mediate access to this memory.

Sampson and Zatorre (1991) studied patients with severe epilepsy who underwent surgery for relief, as well as control subjects. They found deficits in recognition memory for text, whether it was sung or spoken, after a left but not right temporal lobectomy. However, melody recognition when a tune was sung with new words (as compared to encoding) was impaired after either a right or a left temporal lobectomy. Finally, after a right but not left temporal lobectomy, impairments of melody recognition occurred in the absence of lyrics. This suggests dual memory codes for musical memory, with the verbal code relying on left temporal lobe structures and the melodic code relying more on right temporal lobe structures, depending on the demands of encoding.

Semantic vs. episodic
Platel (2005) defined musical semantic memory as memory for pieces without memory for their temporal or spatial context, and musical episodic memory as memory for pieces together with the context in which they were learned. Two distinct patterns of neural activation emerged when the semantic and episodic components of musical memory were compared. Controlling for processes of early auditory analysis, working memory and mental imagery, Platel found that retrieval of semantic musical memory involved activation in the right inferior and middle frontal gyri, the right superior and inferior temporal gyri, the right anterior cingulate gyrus and the right parietal lobe, with some additional activation in the left middle and inferior frontal gyri. Retrieval of episodic musical memory produced bilateral activation in the middle and superior frontal gyri and the precuneus; although the activation was bilateral, it was right-dominant. This research suggests that episodic and semantic musical memory are independent.

According to gender
Using fMRI, Gaab, Keenan & Schlaug (2003) found a difference between males and females in the processing of, and subsequent memory for, pitch. More specifically, males showed more lateralized activity in the anterior and posterior perisylvian regions, with greater activation on the left, and more cerebellar activation than females. Females, in contrast, showed more posterior cingulate and retrosplenial cortex activation than males. Nevertheless, behavioural performance did not differ between males and females.

Expertise
Experts have extensive experience, through practice and education, in a particular field. Musical experts use some of the same strategies as experts in other fields that require large amounts of memorization: chunking, organization and practice. For example, musical experts may organize notes into scales or create a hierarchical retrieval scheme to facilitate retrieval from long-term memory. In a case study of an expert pianist, Chaffin & Imreh (2002) found that a retrieval scheme was developed to guarantee that the music could be recalled with ease. This expert used auditory and motor memory along with conceptual memory. Together, the auditory and motor representations allow for automaticity during performance, whereas conceptual memory is used mainly to intervene when the piece is getting off track. Studying concert soloists, Chaffin and Logan (2006) reiterate that a hierarchical organization exists in memory, but take this a step further, suggesting that soloists use a mental map of the piece that allows them to keep track of its progression. Chaffin and Logan (2006) also demonstrate that performance cues monitor the automatic aspects of performance and adjust them accordingly. They distinguish between basic, interpretive and expressive performance cues: basic cues monitor technical features, interpretive cues monitor changes made to different aspects of the piece, and expressive cues monitor the feelings of the music. These cues are developed when experts pay attention to a particular aspect during practice.

Savantism
Savant syndrome describes a person with a low IQ who nonetheless shows superior performance in one particular field. Sloboda, Hermelin and O'Connor (1985) discussed a patient, NP, who was able to memorize very complex musical pieces after hearing them only three or four times. NP's performance exceeded that of experts with very high IQs, yet his performance on other memory tasks was average for a person with an IQ in his range. They used NP to argue that a high IQ is not needed for the skill of musical memorization and that other factors must be influencing this performance. Miller (1987) also studied a 7-year-old child who was said to be a musical savant. This child had superior short-term memory for music, which was found to be influenced by the attention given to the complexity of the music, the key signature, and repeated configurations within a string. Miller (1987) suggests that a savant's ability is due to encoding the information into already existing meaningful structures in long-term memory.

Child prodigies
Ruthsatz & Detterman (2003) define a prodigy as a child (younger than 10) who is able to excel at “culturally relevant” tasks to an extent rarely seen even in professionals in the field. They describe the case of one particular boy who had already released two CDs (on which he sings in two different languages) and was able to play several instruments by the age of 6.

Other observations made of this young child were that he had:

 * performed numerous concerts
 * appeared twice on national TV
 * made two movie appearances
 * played highly expressive music
 * come from a family with no particular abilities in music
 * never had lessons; he had simply listened to others' pieces and used improvisation
 * an IQ of 132 (2 standard deviations above the average)
 * an extraordinary memory in all domains

Amusia
Amusia is also known as tone deafness. Amusics primarily have deficits in processing pitch, but they also have problems with musical memory, singing and timing, and cannot tell melodies apart by their rhythm or beat. However, amusics can recognize other sounds at a normal level (i.e. lyrics, voices and environmental sounds), demonstrating that amusia is not due to deficits in exposure, hearing or cognition.

Effects on non-musical memory
Music has been shown to improve memory in several situations. In one study of musical effects on memory, visual cues (filmed events) were paired with background music. Later, participants who could not recall details of a scene were presented with its background music as a cue and recovered the inaccessible scene information. Other research supports the idea that music improves memory for text: words presented in song were remembered significantly better than words presented in speech. Earlier research supports this finding: advertising jingles that pair words with music are remembered better than words alone or spoken words with music in the background. Memory for pairing brands with their proper slogans was also enhanced when the advertising incorporated lyrics and music rather than spoken words with music in the background.

Training in music has also been shown to improve verbal memory in children and adults. Participants trained in music and participants without a musical background were tested for immediate recall of words and for recall after a 15-minute delay. Word lists were presented orally to each participant three times, after which participants recalled as many words as they could. Even when matched for intelligence, musically trained participants performed better than non-musically trained participants. The authors suggest that musical training enhances verbal memory processing through neuroanatomical changes in the left temporal lobe (responsible for verbal memory), which is supported by previous research: MRI has shown that this region of the brain is larger in musicians than in non-musicians, which may reflect changes in cortical organization contributing to improved cognitive function.

Anecdotal evidence from an amnesic patient named CH, who suffered from declarative memory deficits, supports a preserved memory capacity for song titles. CH's unique knowledge of accordion music allowed experimenters to test verbal and musical associations. When presented with song titles, CH was able to play the correct song 100% of the time, and when presented with a melody she chose the appropriate title from several distractors with a 90% success rate.

Interference
Interference occurs when information in short-term memory interferes with, or obstructs, the retrieval of other information. Interference is a result of short-term memory's limited capacity: any additional information present at the time of comprehension can displace the target information from short-term memory. There is therefore a risk that one's ability to understand and remember will be compromised if one studies with the television or radio on.

While studies have reported inconsistent results regarding music's effect on memory, it has been demonstrated that music can interfere with various memory tasks. New situations require new combinations of cognitive processing, which draws conscious attention to the novel aspects of a situation. The loudness of music, along with its other elements, can therefore distract one from normal responses by encouraging attentiveness to the musical information, and attention and recall have been shown to be negatively affected by the presence of such a distraction. Wolfe (1983) cautions that educators and therapists should be aware of the potential for environments with sounds occurring simultaneously from many sources, musical and non-musical, to distract and interfere with student learning.

Introversion and extroversion
Researchers Campbell and Hawley (1982) provided evidence of differences between introverts and extroverts in the regulation of arousal. They found that when studying in a library, extroverts were more likely to choose to work in areas with bustle and activity, while introverts were more likely to choose a quiet, secluded area. Accordingly, Furnham and Bradley discovered that introverts presented with music during two cognitive tasks (prose recall and reading comprehension) performed significantly worse on a test of memory recall than extroverts who were also presented with music during the tasks. However, if music was not present during the tasks, introverts and extroverts performed at the same level.

Hemispheric interference
Recent research has demonstrated that the normal right hemisphere of the brain responds to melody holistically, consistent with Gestalt psychology, whereas the left hemisphere evaluates melodic passages in a more analytic fashion, similar to the left hemisphere's feature-detecting capacity in vision. For instance, Regalski (1977) suggested that while listening to the melody of the popular carol "Silent Night," the right hemisphere thinks, "Ah, yes, Silent Night," while the left hemisphere thinks, "two sequences: the first a literal repetition, the second a repetition at different pitch levels - ah, yes, Silent Night by Franz Gruber, typical pastorale folk style." For the most part the brain works well when each hemisphere performs its own function in solving a task or problem; the two hemispheres are quite complementary. However, situations arise in which the two modes conflict, with one hemisphere interfering with the operation of the other.

Absolute pitch
Absolute pitch (AP) is the ability to produce or recognize specific pitches without reference to an external standard. People with AP have internalized pitch references and are thus able to maintain stable representations of pitch in long-term memory. AP is regarded as a rare and somewhat mysterious ability, occurring in as few as 1 in 10,000 people. A method commonly used to test for AP is as follows: subjects are first asked to close their eyes and imagine that a specific song is playing in their heads. Encouraged to start anywhere in the tune they like, they are then instructed to try to reproduce the tones of that song by singing, humming or whistling. The subjects' productions are recorded on digital audio tape, which accurately preserves the pitches they sing while avoiding the potential pitch and speed fluctuations of analog recording. Lastly, the productions are compared to the actual tones sung by the artists on the CDs, and errors are measured in semitone deviations from the correct pitch.
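The semitone-deviation error measure follows directly from the equal-temperament definition of a semitone, in which one octave (a doubling of frequency) spans 12 semitones. A minimal sketch of that calculation (the example frequencies are illustrative, not from the studies cited):

```python
import math

def semitone_deviation(sung_hz, target_hz):
    """Signed deviation of a sung pitch from the target pitch, in semitones.

    Equal temperament: 12 semitones per octave, and an octave is a
    doubling of frequency, so deviation = 12 * log2(sung / target).
    """
    return 12 * math.log2(sung_hz / target_hz)

# A production at 455 Hz against a 440 Hz (A4) target is a bit over
# half a semitone sharp.
print(round(semitone_deviation(455, 440), 2))  # → 0.58
```

An exact octave error (880 Hz against 440 Hz) comes out to precisely 12 semitones, which is a quick sanity check on the formula.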

Testing
The ability to recognize incorrect (musical) pitch is most often tested using the Distorted Tunes Test (DTT). The DTT was originally developed in the 1940s and has been used in large studies of the British population. It measures musical pitch recognition on an ordinal scale, scored as the number of correctly classified tunes; more specifically, it evaluates how well subjects judge whether simple popular melodies contain notes with incorrect pitch. Researchers have used this method to investigate genetic correlates of musical pitch recognition in monozygotic and dizygotic twins. Drayna, Manichaikul, Lange, Snieder and Spector (2001) determined that variation in musical pitch recognition is primarily due to highly heritable differences in auditory functions not tested by conventional audiologic methods. The DTT may therefore be useful in advancing similar research.

In infants
The following testing procedure has been used to assess infants' ability to recall familiar yet complex pieces of music (Reference 8, Ilari), as well as their preference for timbre and tempo. The procedure has demonstrated not only that infants attend longer to familiar than to unfamiliar pieces of music, but also that infants remember the tempo and timbre of familiarized melodies over long periods of time: changing the tempo or timbre at test eliminates an infant's preference for the novel melody. This indicates that infants' long-term memory representations are not simply of the abstract musical structure, but contain surface or performance features as well. The procedure has three phases:


 * 1) Familiarization: The selected musical piece is given to parents/caretakers on a CD. Parents and caretakers are instructed to play the piece three times a day, when the infant is in a quiet and alert state and the home environment is calm and peaceful.
 * 2) Retention: CDs are collected from the parents/caretakers immediately after the familiarization phase to ensure that no listening to the familiar piece occurs during the two-week retention phase.
 * 3) Test: Finally, infants are tested in the lab using the headturn-preference procedure, a behavioral data-collection tool that measures preference for one kind of auditory stimulus over another. The procedure relies on the assumption that an infant will turn its head towards a stimulus it prefers. It is conducted in a testing booth, with the infant sitting on the lap of his or her mother. A light is located on either side of the infant. The trial begins when the infant is looking straight ahead. Mother and experimenter are required to wear tight-fitting earphones which deliver masking music for the duration of the entire procedure; this guarantees that neither mother nor experimenter biases the infant's response. During each trial, one sidelight flashes, urging the infant to look at it. Once the infant turns his or her head and looks at the light, the sound stimulus is played. The stimulus continues to play until the sound finishes or the infant looks away. When the infant turns away from the source for at least two seconds, sound and light turn off and the trial ends. A new trial begins when the infant looks at the center panel again.
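The dependent measure in such a trial is accumulated looking time, with a sustained look-away of at least two seconds ending the trial. A hypothetical sketch of that scoring rule (the sampling format is an assumption, not from the Ilari procedure), taking gaze as a chronological list of (timestamp, is_looking) samples:

```python
def looking_time(samples, lookaway_cutoff=2.0):
    """Total looking time (s) until a look-away of >= lookaway_cutoff seconds.

    `samples` is a chronological list of (timestamp_s, is_looking) pairs;
    each sample describes gaze state until the next timestamp.
    """
    total = 0.0
    away_start = None
    for (t, looking), (t_next, _) in zip(samples, samples[1:]):
        if looking:
            total += t_next - t       # accumulate time spent looking
            away_start = None
        else:
            if away_start is None:
                away_start = t        # start of a look-away episode
            if t_next - away_start >= lookaway_cutoff:
                break                 # sustained look-away: trial ends
    return total

# Infant looks for 3 s, glances away for 1 s (trial continues),
# looks 2 s more, then turns away for good: 5 s of looking total.
gaze = [(0, True), (3, False), (4, True), (6, False), (9, False)]
print(looking_time(gaze))  # → 5.0
```

Note that a brief glance away shorter than the cutoff does not end the trial, matching the "at least two seconds" criterion in the procedure.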

Lyrical vs. instrumental memory
Many students listen to music while they study, often maintaining that it prevents drowsiness and maintains their arousal; some even believe that background music improves work performance. However, Salame and Baddeley (1989) showed that both vocal and instrumental music interfered with performance on linguistic memory tasks. They explained that the disturbance was caused by task-irrelevant phonological information consuming resources in the working memory system: the linguistic component of music can occupy the phonological loop, much as speech does. Consistent with this, vocal music has been found to interfere with memory more than instrumental music or nature-sound music. Rolla (1993) explains that lyrics, being language, develop images that allow for the interpretation of experience in the communicative process. Current research coincides with this idea, maintaining that the sharing of experience through language in song may communicate feeling and mood much more directly than either language itself or instrumental music alone; vocal music also affects emotion and mood more swiftly than instrumental music. However, Fogelson (1973) reported that even instrumental music interfered with children's performance on a reading comprehension test.

Development
Neural structures form and become more sophisticated as a result of experience. For example, a preference for consonance (the harmony or agreement of components) over dissonance (an unstable tone combination) is found early in development. Research suggests that this preference is due both to the experience of structured sounds and to the development of the basilar membrane and auditory nerve, two early-developing structures. An incoming auditory stimulus evokes responses measured in the form of an event-related potential (ERP), a measured brain response resulting directly from a thought or perception. ERP measures differ across normally developing infants from 2 to 6 months of age: infants 4 months and older show faster, more negative ERPs, whereas newborns and infants up to 4 months show slow, unsynchronized, positive ERPs. Trainor et al. (2003) hypothesized that responses from infants younger than four months are produced by subcortical auditory structures, whereas responses in older infants tend to originate in higher cortical structures.

Relative and absolute pitch
There are two methods of encoding and remembering music. The first, relative pitch, refers to a person's ability to identify the intervals between given tones; the song is learned as a continuous succession of intervals. The second, absolute pitch, is the ability to name or replicate a tone without reference to an external standard. Relative pitch has been shown to be more important than absolute pitch in the development of high musical talent. It has also been credited with being the more sophisticated of the two processes, as it allows quick recognition of a melody regardless of pitch, timbre or quality, and can produce physiological responses, for example when a melody violates the learned relative pitch. Relative pitch develops at varying rates depending on culture. Trehub and Schellenberg (2008) found that 5- and 6-year-old Japanese children performed significantly better than same-aged Canadian children at a task requiring the use of relative pitch. They hypothesized that this could be because Japanese children have more exposure to pitch accent, via Japanese language and culture, than Canadian children, whose linguistic environment is predominantly stress-based.
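The difference between the two codes can be made concrete: an interval (relative-pitch) representation is unchanged when a melody is transposed, while an absolute-pitch representation is not. A small illustration using MIDI note numbers (the melody is a made-up example, not from the studies cited):

```python
def intervals(notes):
    """Relative-pitch code: successive semitone intervals between notes."""
    return [b - a for a, b in zip(notes, notes[1:])]

melody = [60, 62, 64, 65, 67]          # C D E F G, as MIDI note numbers
transposed = [n + 5 for n in melody]   # the same tune, a fourth higher

# The absolute codes differ, but the interval codes match, which is why a
# listener with relative pitch still recognizes a transposed tune.
print(intervals(melody) == intervals(transposed))  # → True
```

This also shows why relative pitch supports recognition "regardless of pitch": the interval code simply carries no information about the starting note.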

Plasticity of musical development
Early acquisition of relative pitch allows for accelerated learning of scales and intervals. Musical training assists with the attentional and executive functioning necessary to interpret and efficiently encode music. In conjunction with brain plasticity, these processes become more and more stable. This process is self-reinforcing: the more learning that takes place, the greater the stability of the processes, ultimately decreasing overall brain plasticity. This may explain the discrepancy between the amounts of effort children and adults must put into mastering new tasks.

Modal model
Atkinson and Shiffrin's 1968 model consists of separate components for short-term and long-term memory storage, with short-term memory limited in both capacity and duration. Research suggests that musical short-term memory is stored differently from verbal short-term memory. Berz (1995) found dissimilar modality and recency effects for language versus music, suggesting that different encoding processes are engaged, and demonstrated different levels of interference on tasks from language stimuli versus musical stimuli. Finally, Berz (1995) provided evidence for a separate-store theory through the “unattended music effect”: “If there was a singular acoustic store, unattended instrumental music would cause the same disruptions on verbal performance as would unattended vocal music or unattended vocal speech”; this, however, is not the case.

Baddeley and Hitch’s model of working memory
Baddeley and Hitch's 1974 model consists of three components: one main component, the central executive, and two subcomponents, the phonological loop and the visuospatial sketchpad. The central executive's primary role is to mediate between the two subsystems. The visuospatial sketchpad holds information about what we see. The phonological loop can be further divided into the articulatory control system (the ‘inner voice’, responsible for verbal rehearsal) and the phonological store (the ‘inner ear’, responsible for speech-based storage). Major criticisms of this model include its lack of an account of musical processing and encoding, and its neglect of other sensory inputs, such as olfactory, gustatory and tactile information.

Theoretical model of memory
This theoretical model, proposed by William Berz (1995), is based on the Baddeley and Hitch model. However, Berz modified the model to include a musical memory loop as a loose addition to the phonological loop (that is, almost a separate loop altogether). This new music perceptual loop contains musical inner speech in addition to the verbal inner speech provided by the original phonological loop. He also proposed another loop to accommodate the other sensory inputs disregarded in the Baddeley and Hitch model.

Koelsch’s model
In a model outlined by Stefan Koelsch and Walter Siebel, musical stimuli are processed along a successive timeline that breaks the auditory input down into different characteristics and meanings. They maintain that upon perception the sound reaches the auditory nerve, brainstem and thalamus, at which point features such as pitch height, chroma, timbre, intensity and roughness are extracted; this occurs at about 10-100 ms. Next, melodic and rhythmic grouping occurs, which is then held in auditory sensory memory. After this, intervals and chord progressions are analysed, and a harmonic structure is built upon metre, rhythm and timbre; this occurs at about 180-400 ms after the initial perception. Structural reanalysis and repair follow at about 600-900 ms. Finally, the autonomic nervous system and multimodal association cortices are activated. Koelsch and Siebel proposed that meaning is extracted from the sounds from about 250-500 ms, indicated by the N400, a negative deflection at 400 ms measured with an event-related potential, while interpretation and emotion occur continuously throughout this process.
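The stages and latencies described above can be summarized as overlapping time windows. The sketch below (stage names paraphrased from the text, time windows taken as the approximate values given; the data structure itself is an illustrative assumption) returns which stages are active at a given latency, making it easy to see that, for example, structure building and meaning extraction overlap:

```python
# (onset_ms, offset_ms, stage) -- approximate windows from the model above
STAGES = [
    (10, 100, "feature extraction (pitch height, chroma, timbre, intensity, roughness)"),
    (180, 400, "interval/chord analysis and structure building"),
    (250, 500, "meaning extraction (reflected in the N400)"),
    (600, 900, "structural reanalysis and repair"),
]

def stages_at(latency_ms):
    """Names of processing stages whose time window contains latency_ms."""
    return [name for on, off, name in STAGES if on <= latency_ms <= off]

# At 300 ms, structure building and meaning extraction are both active.
print(stages_at(300))
```

The overlap at 250-400 ms reflects the model's claim that meaning is computed continuously alongside structural analysis rather than strictly after it.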