Auditory system

The auditory system is the sensory system for the sense of hearing.

Ear
See main article: Ear

Outer ear
The folds of cartilage surrounding the ear canal are called the pinna. Sound waves are reflected and attenuated when they hit the pinna, and these changes provide additional information that will help the brain determine the direction from which the sounds came.

The sound waves enter the ear canal, a deceptively simple tube. The ear canal amplifies sounds that are between 3 and 12 kHz. At the far end of the ear canal is the eardrum, which marks the beginning of the middle ear.
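The lower end of that 3–12 kHz range is roughly what one gets by modelling the canal as a quarter-wavelength tube closed at one end (the eardrum). A minimal sketch, assuming a textbook canal length of about 2.5 cm (a figure not given in this article):

```python
# Estimate the ear canal's resonant frequency by treating it as a
# quarter-wavelength resonator closed at one end (the eardrum).
# The canal length is an assumed textbook average, not from this article.

SPEED_OF_SOUND = 343.0   # m/s, in air at ~20 degrees C
CANAL_LENGTH = 0.025     # m, assumed average adult ear canal length

def quarter_wave_resonance(length_m, c=SPEED_OF_SOUND):
    """Fundamental resonant frequency of a tube closed at one end."""
    return c / (4.0 * length_m)

f0 = quarter_wave_resonance(CANAL_LENGTH)
print(f"Estimated ear canal resonance: {f0:.0f} Hz")  # ~3430 Hz
```

This simple model captures only the fundamental resonance; the broad amplification up to 12 kHz also involves the pinna and concha.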

Middle ear
Sound waves travelling through the ear canal will hit the tympanum, or “eardrum”. This wave information travels across the air-filled middle ear cavity via a series of delicate bones: the malleus (hammer), incus (anvil) and stapes (stirrup). These ossicles act as a lever, converting the lower-pressure eardrum sound vibrations into higher-pressure sound vibrations at another, smaller membrane called the oval window. Higher pressure is necessary because the inner ear beyond the oval window contains fluid rather than air. The sound is not amplified uniformly across the ossicular chain. The acoustic reflex of the middle ear muscles helps protect the inner ear from damage. The middle ear still contains the sound information in wave form; it is converted to nerve impulses in the cochlea.
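The pressure amplification described above comes mostly from the area difference between the eardrum and the oval window, with a smaller contribution from the ossicular lever. A sketch of the arithmetic, using commonly cited textbook values that are assumptions rather than figures from this article:

```python
import math

# Middle ear impedance-matching sketch. The areas and lever ratio
# below are commonly cited textbook values (assumed, not from this
# article): eardrum ~55 mm^2, oval window ~3.2 mm^2, lever ~1.3.

EARDRUM_AREA = 55.0      # mm^2, assumed
OVAL_WINDOW_AREA = 3.2   # mm^2, assumed
LEVER_RATIO = 1.3        # assumed malleus/incus lever advantage

def pressure_gain(a_drum, a_window, lever):
    """Pressure amplification = area ratio times lever ratio."""
    return (a_drum / a_window) * lever

gain = pressure_gain(EARDRUM_AREA, OVAL_WINDOW_AREA, LEVER_RATIO)
gain_db = 20 * math.log10(gain)
print(f"Pressure gain: {gain:.1f}x ({gain_db:.0f} dB)")
```

Under these assumptions the gain comes out to roughly 22-fold, on the order of 25–30 dB, which is about what is needed to overcome the air-to-fluid impedance mismatch at the oval window.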

Inner ear
The inner ear consists of the cochlea and several non-auditory structures. The cochlea has three fluid-filled sections. Strikingly, one of these sections contains endolymph, an extracellular fluid whose high potassium content resembles the fluid usually found inside of cells.

The organ of Corti forms a ribbon of sensory epithelium which runs lengthwise down the entire cochlea. The hair cells of the organ of Corti transform the fluid pressure waves into nerve signals. The journey of sound into the nervous system begins with this first step; from here, further processing leads to a panoply of auditory reactions and sensations.
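Position along this sensory ribbon maps onto frequency: high frequencies are transduced at the base of the cochlea and low frequencies at the apex. A standard empirical model of this mapping, not discussed in the article itself, is Greenwood's place-frequency function, sketched here for the human cochlea:

```python
# Greenwood's place-frequency function for the human cochlea:
#   f(x) = A * (10**(a*x) - k)
# where x is the fractional distance from the apex (0) to the base (1).
# The constants A=165.4, a=2.1, k=0.88 are Greenwood's published fits;
# this is an illustrative model, not a result from this article.

A, a, k = 165.4, 2.1, 0.88

def place_to_frequency(x):
    """Characteristic frequency (Hz) at fractional cochlear position x."""
    return A * (10 ** (a * x) - k)

for x in (0.0, 0.5, 1.0):
    print(f"x={x:.1f}: {place_to_frequency(x):,.0f} Hz")
```

Evaluated at the two ends, the model spans roughly 20 Hz to 20 kHz, matching the conventional range of human hearing.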

Hair cell
Hair cells are columnar cells, each with a bundle of 100–200 specialized stereocilia at the top, for which they are named. These cilia are the mechanosensors for hearing. Lightly resting atop the longest cilia is the tectorial membrane, which moves back and forth with each cycle of sound, tilting the cilia and allowing electric current into the hair cell (2).

The hair cells in the mammalian cochlea are of two types: inner and outer hair cells, which differ in their structural and functional properties. Inner hair cells provide the largest neural projection to the central nervous system.

Hair cells, like the photoreceptors of the eye, show a graded response, instead of the spikes typical of other neurons. These graded potentials are not bound by the “all or none” properties of an action potential.

At this point, one may ask how such a wiggle of a hair bundle triggers a difference in membrane potential. The current model is that the cilia are attached to one another by structures called “tip links”, which couple the tip of one cilium to mechanically gated ion channels on its neighbour. When the cilia are deflected by sound-driven motion of the basilar membrane, the tip links pull the ion channels open and allow potassium and calcium to enter.
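This gating scheme is often formalized as a two-state (Boltzmann) model, in which the fraction of open channels rises sigmoidally with bundle deflection. A minimal sketch, with hypothetical parameter values chosen only for illustration:

```python
import math

# Two-state (Boltzmann) gating model for the tip-link-gated
# transduction channel: deflecting the bundle stretches the tip links
# and increases the probability that each channel is open.
# x0 (midpoint) and slope_nm are hypothetical illustrative values.

def open_probability(deflection_nm, x0=0.0, slope_nm=20.0):
    """Fraction of transduction channels open at a given deflection."""
    return 1.0 / (1.0 + math.exp(-(deflection_nm - x0) / slope_nm))

for x in (-100, 0, 100):
    print(f"{x:+4d} nm deflection -> open probability "
          f"{open_probability(x):.2f}")
```

At rest (zero deflection) this model leaves a fraction of channels open, consistent with hair cells carrying a standing transduction current that sound modulates in both directions.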

Neuron to hair relationship
There are far fewer hair cells than afferent nerve fibers in the cochlea. The nerve that innervates the cochlea is the vestibulocochlear nerve, or cranial nerve VIII.

Neuronal dendrites innervate cochlear hair cells. The neurotransmitter itself is thought to be glutamate. At the presynaptic juncture, there is a distinct “presynaptic dense body” or ribbon. This dense body is surrounded by synaptic vesicles and is thought to aid in the fast release of neurotransmitter.

Efferent projections from the brain to the cochlea also play a role in the perception of sound. Efferent synapses occur on outer hair cells and on afferent dendrites under inner hair cells.

Overview
This sound information, now re-encoded, travels down the auditory nerve and through parts of the brainstem (for example, the cochlear nucleus and inferior colliculus), being further processed at each waypoint. The information eventually reaches the thalamus, and from there it is relayed to the cortex. In the human brain, the primary auditory cortex is located in the temporal lobe.

The cochlear nucleus
The cochlear nucleus is the first site of neuronal processing of the newly converted “digital” data from the inner ear. The information is brought via the cochlear nerve to this nucleus, where sound information is organized by frequency. The lower-frequency axons innervate the ventral portions of the dorsal cochlear nucleus and the ventrolateral portions of the anteroventral cochlear nucleus. In contrast, the axons from the higher-frequency hair cells project to the dorsal portion of the anteroventral cochlear nucleus and the uppermost dorsal portions of the dorsal cochlear nucleus. The mid-frequency projections end up in between the two extremes; in this way the frequency spectrum is preserved.

There are four types of cells found in the cochlear nucleus. Stellate cells have a radial, star-like shape, which is where they get their first name. Their second name, chopper cells, refers to their regular, “chopping” firing pattern, which they maintain despite background noise. Chopper cells are also not affected by slight variations in frequency, and in this way encode the characteristic frequency of the specific neuron, or preceding hair cell. These cells are found only in the ventral half of the cochlear nucleus.

Bushy cells have a single, very short dendrite with numerous small branchings, which cause it to resemble a “bush”. These cells are usually innervated by only a select few axons, which dominate their firing patterns. These afferent axons wrap their branches around the entire soma, forming large end bulbs that reliably drive the bushy cell to fire. Accordingly, a single-unit recording of an electrically stimulated bushy neuron characteristically produces exactly one action potential. In this way, the bushy cell is thought to respond to the onset of a new sound and to help in its localization. Along with the chopper cells, the bushy cells are found only in the ventral portion of the cochlear nucleus.

Fusiform cells can be excitatory or inhibitory and are located solely within the dorsal cochlear nucleus. Whilst the bushy cells aid in locating a sound stimulus on the horizontal axis, fusiform cells locate the sound stimulus on the vertical axis. With the combined power of these two types of cells, a listener can locate where a firecracker explodes without the use of his eyes.

Whilst the fusiform and bushy cells take care of locating a sound stimulus in the first and second dimensions of space, the octopus cells seem to take care of the fourth dimension, time (17). Electrical stimuli to the auditory nerve have been shown to evoke a graded postsynaptic potential in the octopus cells. These EPSPs are brief, lasting a consistent 1 millisecond, and do not depend on stimulus strength, although a minimum voltage is required to elicit them. In this way, the octopus cells are thought to be active in the process of timing. These cells are located in the posteroventral cochlear nucleus, but each makes connections with many of the auditory nerve fibers. This is thought to be the pathway which carries precise timing information (2).

Figure four, seen below, illustrates stereotypical behavior for the four major types of cells in the cochlear nucleus. The column labeled “intrinsic properties” shows the action potentials generated by each neuron in response to a typical depolarizing stimulus, along with the neurons’ reactions to hyperpolarizing currents. The depolarizing and hyperpolarizing currents are equal and opposite in magnitude, at 0.4 nA and −0.4 nA. The EPSP column shows the stereotypical postsynaptic potential evoked in these neurons by a near-instantaneous burst of electrical activity, in contrast to the left column, which illustrates a pulse sustained for 50 milliseconds.

Projections from cochlear nucleus
There are three major projections from the cochlear nucleus. Through the medulla, one projection bifurcates: one branch crosses to the contralateral superior olivary complex via the trapezoid body, whilst the other travels to the ipsilateral SOC. This projection is called the ventral acoustic stria. Another projection, called the dorsal acoustic stria, rises above the medulla into the pons, where it reaches the nucleus of the lateral lemniscus along with its kin, the intermediate acoustic stria. The IAS decussates across the medulla before reaching the contralateral lateral lemniscus. The lateral lemniscus in turn projects to the contralateral lateral lemniscus, as well as the inferior colliculus. The inferior colliculus receives projections from the superior olivary complex and the contralateral dorsal acoustic stria, as well as the aforementioned lateral lemniscus.

It is here at the inferior colliculus that the simple processing of sound stops for some species. The indirect projections from the ipsilateral cochlear nucleus stop at the inferior colliculus; the contralateral projections are the only ones which continue on to the medial geniculate nucleus of the thalamus, and consequently the cortex. The auditory cortex is a part of the superior temporal gyrus.

Superior Olivary Complex
The superior olivary complex is located in the pons and receives projections predominantly from the anteroventral cochlear nucleus, although the posteroventral nucleus projects there as well, via the ventral acoustic stria. It is at this site that the first binaural interactions occur (4). The superior olivary complex is divided into three parts: the medial superior olive, the lateral superior olive and the nucleus of the trapezoid body (3).

Medial superior olive
The medial superior olive is thought to help locate the position of a sound on the azimuth axis (3), i.e. its angle from a reference direction, such as 32 degrees from north. One’s first instinct may be to think that this nucleus includes vertical information, but this is not the case: the fusiform cells do not project to anything at the level of the pons, and only come into play at the inferior colliculus. Only horizontal data is present here, but it comes from two different ear sources, which aids in localizing sound on the azimuth axis (2). The superior olive does this by measuring the difference in arrival time of the same stimulus at the two ears. Travelling around the head takes about 700 μs, and it is assumed that the medial superior olive is able to detect delays of this scale. In fact, people have been observed to detect interaural differences down to 10 microseconds (2). Kandel claims that this nucleus is tonotopically organized, but recent evidence (3) disagrees: Dr. Douglas Oliver claims that the tonotopic map of the medial superior olive is “most likely a complex, nonlinear map.”
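The roughly 700 μs figure can be reproduced with a simple spherical-head model of the interaural time difference (Woodworth's classic approximation; the head radius below is an assumed average, not a value from this article):

```python
import math

# Interaural time difference (ITD) under Woodworth's spherical-head
# approximation:  ITD(theta) = (r / c) * (theta + sin(theta)),
# where theta is the azimuth in radians measured from straight ahead.
# The head radius is an assumed adult average, not from this article.

HEAD_RADIUS = 0.0875    # m, assumed
SPEED_OF_SOUND = 343.0  # m/s

def itd_seconds(azimuth_deg, r=HEAD_RADIUS, c=SPEED_OF_SOUND):
    """Arrival-time difference between the two ears, in seconds."""
    theta = math.radians(azimuth_deg)
    return (r / c) * (theta + math.sin(theta))

# Sound directly to one side (90 degrees) gives the maximum ITD:
print(f"Max ITD: {itd_seconds(90) * 1e6:.0f} microseconds")
```

Under these assumptions the maximum ITD comes out near 650–700 μs; a 10 μs discrimination threshold therefore corresponds to an angular resolution of only a few degrees near the midline.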

The projections of the ipsilateral medial superior olive terminate densely in the central nucleus of the inferior colliculus. The majority of these axons are considered to be “round shaped”, or type R. These R axons are mostly glutamatergic, contain round synaptic vesicles and form asymmetric synaptic junctions (4).

Lateral superior olive
This olive has similar functions to the medial superior olive, but employs intensity differences to home in on a sound source. This is the part of the brain stem that labels the louder sound from the left ear as being on the left-hand side. The lateral olive receives input from both cochlear nuclei, although the contralateral projections arrive indirectly through the nucleus of the trapezoid body. The contralateral and ipsilateral inputs are in stark opposition to one another, and thus the cells in the lateral superior olive fire accordingly when one lateral input is greater than the other.
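The opposed excitatory and inhibitory inputs described above amount to a level comparison between the two ears. A minimal sketch of that comparison; the decibel values and the 1 dB threshold are hypothetical illustrations, not figures from this article:

```python
import math

# Sketch of the interaural level difference (ILD) comparison performed
# by the lateral superior olive: excitation from one ear is pitted
# against inhibition driven by the other. All numbers are hypothetical.

def level_db(intensity_w_m2, i_ref=1e-12):
    """Sound level in dB relative to a reference intensity (W/m^2)."""
    return 10 * math.log10(intensity_w_m2 / i_ref)

def lso_side(left_db, right_db, threshold_db=1.0):
    """Report which side the level comparison favours."""
    diff = left_db - right_db
    if abs(diff) < threshold_db:
        return "centre"
    return "left" if diff > 0 else "right"

print(lso_side(left_db=62.0, right_db=55.0))  # louder on the left
```

The head's acoustic shadow makes ILDs large only at high frequencies, which is why this intensity cue complements the medial superior olive's timing cue.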

The projections from the contralateral lateral superior olive go to the central nucleus of the inferior colliculus. The axonal projections are both round (or R), as seen above, and also a small number of pleomorphic, or PL, axons. Pleomorphic is a synonym for polymorphic or protean, and means the axons can be found in many forms (5). The pleomorphic axons are mostly inhibitory, with the neurotransmitters glycine and GABA at their disposal, stored inside pleomorphic synaptic vesicles. These axons form symmetrical synapses with the neurons of the inferior colliculus and terminate in a less dense fashion.

The projections of the ipsilateral lateral superior olive contain equal proportions of round and pleomorphic axons. The density of axon terminations along this olivary pathway is not uniform, and it contributes the highest density of PL axons to the inferior colliculus.

Inferior Colliculus
The inferior colliculi of the midbrain are located just below the visual processing centers known as the superior colliculi. The inferior colliculus is the first place where the vertically orienting data from the fusiform cells in the dorsal cochlear nucleus can finally synapse with the horizontally orienting data. This homecoming of the aural dimensions puts these dual mesencephalic bumps in a position to fully integrate all the sound location data.

The inferior colliculus has a relatively high metabolism for the brain. Research at the Conrad Simon Memorial Institute measured blood flow in the IC at 1.80 cc/g/min in the cat brain. For reference, the runner-up in the included measurements was the somatosensory cortex, at 1.53. This indicates that the inferior colliculus is metabolically more active than many other parts of the brain. The hippocampus, normally considered to use a disproportionate amount of energy, was not measured or compared (6).

Skottun et al. ran a study of the neurons in the inferior colliculus, comparing single-unit information to the behavioral response in guinea pigs. They found that interaural time delay discrimination in the neurons was comparable to the behavioral ability to detect the source of a sound. Had these findings been inconsistent, the localization of sound would have to work through a different mechanism (7).

The inferior colliculus receives input from both the ipsilateral and contralateral cochlear nuclei, and thus from both ears. There is, however, some lateralization: the dorsal projections (containing vertical data) project only to the contralateral inferior colliculus. The inferior colliculus contralateral to the ear from which it receives the most information then projects to its ipsilateral medial geniculate nucleus.

Medial Geniculate Nucleus
This nucleus is part of the thalamic relay system, even though it lies at the caudal edge of the thalamus, adjacent to the rostral midbrain (2). This nucleus is organized into three separate divisions and maintains the tonotopic organization of most of the preceding nuclei. Most functions of the inferior colliculus are preserved and are relayed to the auditory cortex.

Primary Auditory Cortex


The auditory cortex is the most highly organized processing unit of sound in the brain. This cortex area is the neural crux of hearing, and, in humans, language and music.

The auditory cortex is divided into three separate parts: the primary, secondary and tertiary auditory cortex. These structures are formed concentrically around one another, with the primary AC in the middle and the tertiary AC on the outside.

The primary auditory cortex is tonotopically organized, which means that certain cells in the auditory cortex are sensitive to specific frequencies, an organization preserved throughout most of the audition circuit. This area of the brain “is thought to identify the fundamental elements of music, such as pitch and loudness” (17). This makes sense, as this is the area which receives direct input from the medial geniculate nucleus of the thalamus. The secondary auditory cortex has been implicated in the processing of “harmonic, melodic and rhythmic patterns” (17). The tertiary auditory cortex supposedly integrates everything into the overall experience of music (17).

An evoked-response study of congenitally deaf kittens by Klinke et al. utilized field potentials to measure cortical plasticity in the auditory cortex. These kittens were stimulated and measured against unstimulated congenitally deaf cats (CDCs) and normal-hearing cats. The field potentials measured for artificially stimulated CDCs were eventually much stronger than those of a normal-hearing cat (8). This is in concordance with Eckart Altenmuller’s study, in which it was observed that students who received musical instruction had greater cortical activation than those who did not (9).

The auditory cortex exhibits some curious behavior pertaining to the gamma-wave frequency. When subjects are exposed to three or four cycles of a 40 hertz click train, an abnormal spike appears in the EEG data which is not present for other stimuli. The spike in neuronal activity correlating to this frequency is not restrained to the tonotopic organization of the auditory cortex. It has been theorized that this is a “resonant frequency” of certain areas of the brain, and it appears to affect the visual cortex as well (10).

Gamma-band activation (20 to 40 Hz) has been shown to be present during the perception of sensory events and the process of recognition. In their 2000 study, Kneif et al. presented subjects with the first eight notes of well-known tunes, such as Yankee Doodle and Frere Jacques. Randomly, the sixth and seventh notes were omitted, and an electroencephalogram and a magnetoencephalogram were each employed to measure the neural results. Specifically, the presence of gamma waves induced by the auditory task was measured from the temples of the subjects. The omitted stimulus response (OSP) was located in a slightly different position: 7 mm more anterior, 13 mm more medial and 13 mm more superior with respect to the complete sets. The OSP recordings were also characteristically lower in gamma waves compared to the complete musical set. The evoked responses during the sixth and seventh omitted notes are assumed to be imagined, and were characteristically different, especially in the right hemisphere (12). The right auditory cortex has long been shown to be more sensitive to tonality, while the left auditory cortex is more sensitive to minute sequential differences in sound, specifically speech.
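The gamma-band measurements in studies like these amount to estimating spectral power in a 20–40 Hz window. A minimal sketch of how that might be done with a Fourier transform, run here on a synthetic signal (a 40 Hz oscillation buried in noise; the sampling rate is an arbitrary choice, and nothing here reproduces the actual studies):

```python
import numpy as np

# Sketch of gamma-band power extraction from an EEG/MEG-like trace.
# The signal is synthetic: a 40 Hz sinusoid plus Gaussian noise.

FS = 500  # Hz, assumed sampling rate
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    """Mean spectral power between lo and hi Hz, via the FFT."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spectrum[mask].mean()

gamma = band_power(signal, FS, 20, 40)
baseline = band_power(signal, FS, 60, 80)
print(gamma > baseline)  # the 40 Hz component dominates the gamma band
```

Real analyses add windowing, epoch averaging and artifact rejection, but the band-power comparison is the core of the measurement.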

Hallucinations have been shown to produce oscillations which parallel (although are not exactly the same as) the gamma frequency range. Sperling showed in his 2004 study that auditory hallucinations produce oscillations in the 12.5–30 Hz band. The bands occurred in the left auditory cortex of schizophrenic patients and were checked against 13 controls (18). This aligns with studies of people remembering a song in their minds: they do not perceive any sound, but experience the melody, rhythm and overall experience of sound. When schizophrenics experience hallucinations, it is the primary auditory cortex which becomes active. This is characteristically different from remembering a sound stimulus, which only faintly activates the tertiary auditory cortex (16). By deduction, an artificial stimulation of the primary auditory cortex should elicit an incredibly real auditory hallucination. The termination of all audition and music in the tertiary auditory cortex creates a fascinating nexus of aural information. If this theory is true, it would be interesting to study a subject with a damaged TAC, or one with artificially suppressed function. This would be very difficult to do, as the tertiary cortex is simply a ring around the secondary, which is in turn a ring around the primary AC.

Tone is perceived in more places than just the auditory cortex; one especially fascinating area is the rostromedial prefrontal cortex (13). In their 2002 study, Janata et al. used an fMRI machine to study the areas of the brain which were active during tonality processing. The results displayed several areas which are not normally considered to be part of the audition process. The rostromedial prefrontal cortex is a subsection of the medial prefrontal cortex, which projects to the amygdala and is thought to aid in the inhibition of negative emotion (14). The medial prefrontal cortex is thought to be the core developmental difference between the impulsive teenager and the calm adult. The rostromedial prefrontal cortex is tonality sensitive, meaning it is activated by the tones and frequencies of resonant sounds and music. It could be hypothesized that this is the mechanism by which music ameliorates the soul (or, if one prefers, the limbic system).