Speech encoding

Speech coding is the compression of speech (into a code) for transmission, performed by speech codecs that use audio signal processing and speech processing techniques.

The two most important applications of speech coding are mobile telephony and internet telephony (voice over IP).

The techniques used in speech coding are similar to those in audio data compression and audio coding, where knowledge of psychoacoustics is used to transmit only data that is relevant to the human auditory system. For example, in narrowband speech coding, only information in the frequency band from about 300 Hz to 3400 Hz is transmitted, but the reconstructed signal is still adequate for intelligibility.
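The effect of such band-limiting can be sketched as follows. This is an illustrative pure-Python example (the function name and the naive DFT approach are choices made for this sketch; real codecs use proper filter designs): frequency components outside the narrowband telephone range are simply zeroed out.

```python
import cmath

def bandlimit(signal, fs, lo, hi):
    """Keep only frequency content between lo and hi Hz.

    Naive O(n^2) DFT, zero out-of-band bins, inverse DFT.
    Purely illustrative -- real systems use FIR/IIR filters.
    """
    n = len(signal)
    spec = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]
    for k in range(n):
        f = min(k, n - k) * fs / n  # bin frequency (handles conjugate bins)
        if not (lo <= f <= hi):
            spec[k] = 0
    return [sum(spec[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]
```

Feeding in a mixture of a 1000 Hz tone (inside the band) and a 100 Hz tone (below it) at an 8 kHz sampling rate returns only the 1000 Hz component.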

However, speech coding differs from audio coding in that much more statistical information is available about the properties of speech. In addition, some auditory information which is relevant in audio coding can be unnecessary in the speech coding context. In speech coding, the most important criterion is preservation of intelligibility and "pleasantness" of speech, with a constrained amount of transmitted data.

It should be emphasised that the intelligibility of speech includes, besides the literal content, also speaker identity, emotions, intonation, timbre and so on, all of which are important for perfect intelligibility. The more abstract concept of pleasantness of degraded speech is a different property from intelligibility, since degraded speech can be completely intelligible yet subjectively annoying to the listener.

In addition, most speech applications require low coding delay, as long coding delays interfere with speech interaction.

The A-law algorithm and the μ-law algorithm are used in nearly all landline long-distance telephone communications. They can be seen as a kind of speech encoding, requiring only 8 bits per sample but giving effectively 12 bits of resolution.
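The idea behind this companding can be sketched as follows. This is a minimal illustration using the continuous μ-law formula with μ = 255 (the function names are made up for the example; real telephone hardware uses a segmented piecewise-linear approximation of this curve):

```python
import math

MU = 255  # mu-law parameter used in telephony

def mu_law_encode(x):
    """Compress a sample in [-1, 1] to an 8-bit code (0..255)."""
    y = math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)
    return int(round((y + 1) / 2 * 255))  # map [-1, 1] to 0..255

def mu_law_decode(code):
    """Expand an 8-bit code back to a sample in [-1, 1]."""
    y = code / 255 * 2 - 1
    return math.copysign((math.exp(abs(y) * math.log1p(MU)) - 1) / MU, y)
```

Because the logarithmic curve is steep near zero, quiet samples are quantized much more finely than an 8-bit uniform quantizer would allow, which is where the "effectively 12 bits" of resolution comes from.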

The most common speech coding scheme is Code-Excited Linear Predictive (CELP) coding, variants of which are used for example in the GSM EFR and AMR codecs. In CELP, the modelling is divided into two stages: a linear predictive stage that models the spectral envelope, and a codebook-based model of the residual of the linear predictive stage.
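The linear predictive stage can be sketched as below. This is an illustrative pure-Python Levinson-Durbin recursion with made-up function names, not any standard's reference implementation; it estimates the predictor coefficients from the autocorrelation of a frame and then computes the residual that the codebook stage would encode:

```python
def lpc(signal, order):
    """Estimate linear predictor coefficients via the autocorrelation
    method and the Levinson-Durbin recursion.
    Returns (coefficients, final prediction-error energy)."""
    n = len(signal)
    r = [sum(signal[i] * signal[i + k] for i in range(n - k))
         for k in range(order + 1)]  # autocorrelation lags 0..order
    a = [0.0] * order
    e = r[0]
    for k in range(order):
        acc = r[k + 1] - sum(a[j] * r[k - j] for j in range(k))
        refl = acc / e  # reflection coefficient
        new_a = a[:]
        new_a[k] = refl
        for j in range(k):
            new_a[j] = a[j] - refl * a[k - 1 - j]
        a = new_a
        e *= 1 - refl * refl
    return a, e

def residual(signal, a):
    """Prediction residual: the part CELP models with a codebook."""
    order = len(a)
    return [signal[t] - sum(a[j] * signal[t - 1 - j] for j in range(order))
            for t in range(order, len(signal))]
```

On speech-like (strongly correlated) signals the residual has far less energy than the signal itself, which is why quantizing the residual is much cheaper than quantizing the waveform directly.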

In addition to the actual speech coding of the signal, it is often necessary to use channel coding for transmission, to avoid losses due to transmission errors. Usually, speech coding and channel coding methods have to be chosen in pairs, with the more important bits in the speech data stream protected by more robust channel coding, in order to get the best overall coding results.
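The pairing idea can be illustrated with a toy scheme, assuming a simple triplication code as a stand-in for the real channel codes used in practice (the function names and the majority-vote decoder are inventions for this sketch):

```python
def uep_encode(important_bits, other_bits):
    """Unequal error protection: triplicate the perceptually important
    bits, send the remaining bits unprotected."""
    protected = [b for bit in important_bits for b in (bit, bit, bit)]
    return protected + list(other_bits)

def uep_decode(frame, n_important):
    """Majority-vote the protected bits; pass the rest through."""
    protected = frame[:3 * n_important]
    rest = frame[3 * n_important:]
    important = [int(sum(protected[3 * i:3 * i + 3]) >= 2)
                 for i in range(n_important)]
    return important, list(rest)
```

A single bit error in the protected region is corrected by the majority vote, while an error among the unprotected bits only degrades a perceptually less important part of the frame.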

The Speex project is an attempt to create a free software speech coder, unencumbered by patent restrictions.

Major subfields:
 * Wide-band speech coding
   * AMR-WB for WCDMA networks
   * VMR-WB for CDMA2000 networks
 * Narrow-band speech coding
   * FNBDT for military applications
   * SMV for CDMA networks
   * Full Rate, Half Rate, EFR, AMR for GSM networks