
Cognitive musicology is a branch of cognitive science concerned with computationally modeling musical knowledge with the goal of understanding both music and cognition.[1]

Cognitive musicology can be differentiated from the fields of music cognition, music psychology and cognitive neuroscience of music by a difference in methodological emphasis. Cognitive musicology uses computer modeling to study music-related knowledge representation and has roots in artificial intelligence and cognitive science. The use of computer models provides an exacting, interactive medium in which to formulate and test theories.[2]

This interdisciplinary field investigates topics such as the parallels between language and music in the brain. Biologically inspired models of computation, such as neural networks and evolutionary programs, often figure in this research.[3] The field seeks to model how musical knowledge is represented, stored, perceived, performed, and generated. By using a well-structured computer environment, the systematic structures of these cognitive phenomena can be investigated.[4]

Notable researchers

The polymath Christopher Longuet-Higgins, who coined the term "cognitive science", is one of the pioneers of cognitive musicology. Among other things, he is noted for the computational implementation of an early key-finding algorithm.[5] Perceiving key is an essential part of listening to tonal music, and the key-finding problem has attracted considerable attention in the psychology of music over the past several decades. Carol Krumhansl and Mark Schmuckler proposed an empirically grounded key-finding algorithm which bears their names.[6] Their approach is based on key profiles which were painstakingly determined using what has come to be known as the probe-tone technique.[7] The algorithm successfully models the perception of musical key in short excerpts of music, as well as tracking listeners' changing sense of key throughout an entire piece.[8] David Temperley, whose early work in the field applied dynamic programming to aspects of music cognition, has suggested a number of refinements to the Krumhansl-Schmuckler algorithm.[9]
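The correlational core of the Krumhansl-Schmuckler approach can be sketched in a few lines: the pitch-class distribution of an excerpt is correlated with each of the 24 rotations of the major and minor key profiles, and the best-scoring rotation is reported as the key. The profile values below are the published Krumhansl-Kessler probe-tone ratings; everything else (function names, the toy input) is illustrative, not the authors' reference implementation.

```python
# Minimal sketch of the Krumhansl-Schmuckler key-finding algorithm.
# Profiles: Krumhansl-Kessler probe-tone ratings (major and minor).
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09, 2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53, 2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def find_key(pc_durations):
    """pc_durations: total sounded duration for each of the 12 pitch classes.
    Returns (tonic name, mode) of the best-correlating key profile."""
    best = None
    for tonic in range(12):
        for mode, profile in (("major", MAJOR), ("minor", MINOR)):
            # Rotate the profile so its tonic weight sits on `tonic`.
            rotated = profile[-tonic:] + profile[:-tonic] if tonic else profile
            r = pearson(pc_durations, rotated)
            if best is None or r > best[0]:
                best = (r, NAMES[tonic], mode)
    return best[1], best[2]

# One beat on each pitch class of the C major scale:
print(find_key([1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 1]))  # ('C', 'major')
```

Temperley's refinements adjust, among other things, the profile weights and the treatment of key changes over time; the sketch above scores only a single static distribution.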

Otto Laske was a champion of cognitive musicology.[10] A collection of papers that he co-edited served to heighten the visibility of cognitive musicology and to strengthen its association with AI and music.[11] The foreword of this book reprints a free-wheeling interview with Marvin Minsky, one of the founding fathers of AI, in which he discusses some of his early writings on music and the mind.[12] AI researcher turned cognitive scientist Douglas Hofstadter has also contributed a number of ideas pertaining to music from an AI perspective.[13] Musician Steve Larson, who worked for a time in Hofstadter's lab, formulated a theory of "musical forces" derived by analogy with physical forces.[14] Hofstadter[15] also weighed in on David Cope's experiments in musical intelligence,[16] which take the form of a computer program called EMI that produces music in the style of, say, Bach, Chopin, or Cope himself.

Cope's programs are written in Lisp, which has proven to be a popular language for research in cognitive musicology. Desain and Honing have exploited Lisp in their efforts to tap the potential of microworld methodology in cognitive musicology research.[17] Also working in Lisp, Heinrich Taube has explored computer composition from a wide variety of perspectives.[18] There are, of course, researchers who have chosen languages other than Lisp for their research into the computational modeling of musical processes. Robert Rowe, for example, explores "machine musicianship" through C++ programming.[19] A rather different computational methodology for researching musical phenomena is the toolkit approach advocated by David Huron.[20] At a higher level of abstraction, Geraint Wiggins has investigated general properties of music knowledge representations, such as structural generality and expressive completeness.[21]
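The toolkit idea — many small tools, each doing one job, composed freely over a common representation — can be illustrated with a minimal sketch. This is written in Python rather than Lisp or the Humdrum tools themselves, and all function names and the note representation are hypothetical:

```python
# Toy "toolkit" over a simple note representation: a melody is a list
# of (MIDI pitch, duration) pairs, and each tool does one small job.

def transpose(notes, semitones):
    """Shift every pitch by a fixed interval, preserving durations."""
    return [(p + semitones, d) for p, d in notes]

def intervals(notes):
    """Melodic intervals in semitones between successive notes."""
    return [b[0] - a[0] for a, b in zip(notes, notes[1:])]

def pc_histogram(notes):
    """Total duration sounded per pitch class (12 bins)."""
    hist = [0.0] * 12
    for p, d in notes:
        hist[p % 12] += d
    return hist

# A C major arpeggio: C4, E4, G4.
melody = [(60, 1.0), (64, 1.0), (67, 2.0)]
print(intervals(melody))                 # [4, 3]
print(intervals(transpose(melody, 2)))   # transposition preserves intervals: [4, 3]
```

The payoff of this style, as in Humdrum, is that the tools compose: the pitch-class histogram of a transposed melody, for instance, is just `pc_histogram(transpose(melody, n))`.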

Although a great deal of cognitive musicology research features symbolic computation, notable contributions have also come from biologically inspired computational paradigms. For example, Jamshed Bharucha and Peter Todd have modeled the perception of tonal music with neural networks.[22] Al Biles has applied genetic algorithms to the composition of jazz solos.[23] Numerous researchers have explored algorithmic composition grounded in a wide range of mathematical formalisms.[24][25]
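A toy genetic algorithm in the general spirit of Biles's GenJam (though not his actual system, which evolves solos interactively with feedback from a human mentor) might look like the following; the fitness function, parameter values, and pitch ranges are invented purely for illustration:

```python
# Toy genetic algorithm for melody generation: evolve 8-note melodies
# toward a made-up fitness that rewards in-scale notes and stepwise motion.
import random

random.seed(0)

SCALE = {0, 2, 4, 5, 7, 9, 11}  # C major pitch classes

def fitness(melody):
    in_scale = sum(1 for p in melody if p % 12 in SCALE)
    smooth = sum(1 for a, b in zip(melody, melody[1:]) if abs(a - b) <= 2)
    return in_scale + smooth

def mutate(melody, rate=0.2):
    """Replace each note with a random pitch with probability `rate`."""
    return [random.randint(55, 79) if random.random() < rate else p for p in melody]

def crossover(a, b):
    """One-point crossover of two parent melodies."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, length=8, generations=60):
    pop = [[random.randint(55, 79) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitism: keep the top half
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the top half of each generation survives unchanged, the best fitness in the population never decreases; real systems like GenJam replace the hand-coded fitness with human judgments.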

Within cognitive psychology, among the most prominent researchers is Diana Deutsch, whose wide-ranging work spans absolute pitch, musical illusions, musical knowledge representations, and the relationship between music and language.[26] Equally important is Aniruddh D. Patel, whose work combines the traditional methodologies of cognitive psychology with neuroscience. Patel is also the author of a comprehensive survey of cognitive science research on music.[27]

Perhaps the most significant contribution to viewing music from a linguistic perspective is the Generative Theory of Tonal Music (GTTM) proposed by Fred Lerdahl and Ray Jackendoff.[28] Although GTTM is presented at the algorithmic level of abstraction rather than the implementational level, its ideas have found expression in a number of computational projects.[29][30]
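One of GTTM's components, grouping structure, rests on preference rules such as temporal proximity: listeners tend to hear a group boundary after a relatively long gap between note onsets. A toy detector for that single intuition — an illustration of why the theory invites computational treatment, not an implementation of GTTM itself — might look like this:

```python
# Toy grouping-boundary detector loosely inspired by GTTM's proximity
# preference rules: propose a boundary where the inter-onset interval
# (IOI) is locally larger than its neighbors.

def group_boundaries(onsets):
    """Given note onset times, return indices of notes that begin a new
    group because the gap before them is a local maximum."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    return [i + 1 for i in range(1, len(iois) - 1)
            if iois[i] > iois[i - 1] and iois[i] > iois[i + 1]]

# Three quick notes, a long gap, then four more notes:
onsets = [0.0, 0.5, 1.0, 2.5, 3.0, 3.5, 4.0]
print(group_boundaries(onsets))  # [3]: a boundary after the long gap
```

The full theory weighs many such rules against one another (proximity, change, symmetry, parallelism), which is precisely the kind of rule interaction the computational projects cited above attempt to formalize.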

References

  1. Laske, Otto (1999). Navigating New Musical Horizons (Contributions to the Study of Music and Dance). Westport: Greenwood Press.
  2. Laske, O. (1999). AI and music: A cornerstone of cognitive musicology. In M. Balaban, K. Ebcioglu, & O. Laske (Eds.), Understanding Music with AI: Perspectives on Music Cognition. Cambridge: The MIT Press.
  3. Graci, C. (2009-2010). A brief tour of the learning sciences featuring a cognitive tool for investigating melodic phenomena. Journal of Educational Technology Systems, 38(2), 181-211.
  4. Hamman, M. (1999). Structure as performance: Cognitive musicology and the objectification of procedure. In J. Tabor (Ed.), Otto Laske: Navigating New Musical Horizons. New York: Greenwood Press.
  5. Longuet-Higgins, C. (1987). Mental Processes: Studies in Cognitive Science. Cambridge, MA: The MIT Press.
  6. Krumhansl, Carol (1990). Cognitive Foundations of Musical Pitch. Oxford: Oxford University Press.
  7. Krumhansl, C., & Kessler, E. (1982). Tracing the dynamic changes in perceived tonal organisation in a spatial representation of musical keys. Psychological Review, 89, 334-368.
  8. Schmuckler, M. A., & Tomovski, R. (2005). Perceptual tests of musical key-finding. Journal of Experimental Psychology: Human Perception and Performance, 31, 1124-1149.
  9. Temperley, David (2001). The Cognition of Basic Musical Structures. Cambridge: The MIT Press.
  10. Tabor, J. (Ed.) (1999). Otto Laske: Navigating New Musical Horizons. Westport: Greenwood Press.
  11. Balaban, Mira (1992). Understanding Music with AI. Menlo Park: AAAI Press.
  12. Minsky, M. (1981). Music, mind, and meaning. Computer Music Journal, 5(3), 28-44.
  13. Hofstadter, Douglas (1999). Gödel, Escher, Bach. New York: Basic Books.
  14. Larson, S. (2004). Musical forces and melodic expectations: Comparing computer models with experimental results. Music Perception, 21(4), 457-498.
  15. Cope, David (2004). Virtual Music. Cambridge: The MIT Press.
  16. Cope, David (1996). Experiments in Musical Intelligence. Madison: A-R Editions.
  17. Honing, H. (1993). A microworld approach to formalizing musical knowledge. Computers and the Humanities, 27(1), 41-47.
  18. Taube, Heinrich (2004). Notes from the Metalevel. New York: Routledge.
  19. Rowe, Robert (2004). Machine Musicianship. Cambridge: The MIT Press.
  20. Huron, D. (2002). Music information processing using the Humdrum Toolkit: Concepts, examples, and lessons. Computer Music Journal, 26(2), 11-26.
  21. Wiggins, G. et al. (1993). A framework for the evaluation of music representation systems. Computer Music Journal, 17(3), 31-42.
  22. Bharucha, J. J., & Todd, P. M. (1989). Modeling the perception of tonal structure with neural nets. Computer Music Journal, 44-53.
  23. Biles, J. A. (1994). GenJam: A genetic algorithm for generating jazz solos. Proceedings of the 1994 International Computer Music Conference. San Francisco: International Computer Music Association.
  24. Nierhaus, Gerhard (2008). Algorithmic Composition. Berlin: Springer.
  25. Cope, David (2005). Computer Models of Musical Creativity. Cambridge: The MIT Press.
  26. Deutsch, Diana (1999). The Psychology of Music. Boston: Academic Press.
  27. Patel, Aniruddh (2008). Music, Language, and the Brain. Oxford: Oxford University Press.
  28. Lerdahl, Fred, & Jackendoff, Ray (1996). A Generative Theory of Tonal Music. Cambridge: The MIT Press.
  29. Katz, Jonah, & Pesetsky, David (May 2009). The recursive syntax and prosody of tonal music. Recursion: Structural Complexity in Language and Cognition conference, UMass Amherst.
  30. Hamanaka, Masatoshi, Hirata, Keiji, & Tojo, Satoshi (2006). Implementing 'A Generative Theory of Tonal Music'. Journal of New Music Research, 35(4), 249-277.

Further reading

  • Seifert, Uwe (2010): Investigating the Musical Mind: Situated Cognition, Artistic Human-Robot Interaction Design, and Cognitive Musicology (English/Korean). In: Principles of Media Convergence in the Digital Age. Proceedings of the EWHA HK International Conference 2010, pp. 61–82.
  • Seifert, Uwe (1991): The Schema Concept: A Critical Review of its Development and Current Use in Cognitive Science and Research on Music Perception. In: A. Camurri/C. Canepa (Eds.), Proceedings of the IX CIM Colloquium on Musical Informatics, Genova: AIMI/DIST, pp. 116–131.
  • Aiello, R., & Sloboda, J. (1994). Musical perceptions. Oxford: Oxford University Press. —A balanced collection of papers by some of the leading figures in the field of music perception and cognition. Opening chapters on emotion and meaning in music (by Leonard B. Meyer) and the music-as-language metaphor (Rita Aiello) are followed by a range of insightful papers on the perception of music by Nicholas Cook, W. Jay Dowling, Jamshed Bharucha, and others.
  • Levitin, D. (2007). This is your brain on music. New York: Plume. —Recording engineer turned music psychologist Daniel Levitin talks about the psychology of music in an up-tempo, informal, and personal way. Examples drawn from rock and related genres, together with the limited use of technical terms, make the book appealing to a wide audience.
  • Jourdain, R. (1997). Music, the brain, and ecstasy. New York: Harper Collins. —A far-reaching study of how music captivates us so completely and why we form such powerful connections to it. Leading us to an understanding of the pleasures of sound, Robert Jourdain draws on a variety of fields including science, psychology, and philosophy.

This page uses Creative Commons Licensed content from Wikipedia (view authors).
Community content is available under CC-BY-SA unless otherwise noted.