How Does Linear Predictive Coding (LPC) Reduce Bandwidth in Speech Coding?

AI Thread Summary
Linear Predictive Coding (LPC) reduces bandwidth in speech coding by modeling the speech signal as a linear combination of its past samples, allowing for efficient representation. The process involves predicting future samples based on past data, which means only the differences between the actual and predicted values need to be transmitted, significantly lowering the data rate. Techniques like the Levinson-Durbin algorithm can be used to compute the LPC coefficients that define the predictive model. By encoding the signal with fewer bits—such as using 8 bits instead of 16—the overall bandwidth required for transmission is effectively halved. Understanding these principles is crucial for optimizing speech compression in mobile communications.
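As an illustration of the Levinson-Durbin step mentioned in the summary, here is a toy Python sketch (my own minimal implementation for clarity, not code from any actual speech codec):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the Toeplitz normal equations for LPC coefficients,
    given autocorrelation values r[0], ..., r[order]."""
    a = np.zeros(order + 1)
    a[0] = 1.0                # a[0] is fixed at 1 by convention
    err = r[0]                # prediction error power
    for i in range(1, order + 1):
        # reflection coefficient for this model order
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        # order-update of the coefficient vector
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
    return a, err

# For a signal with autocorrelation r[k] = 0.9**k (an AR(1) process),
# the recovered predictor is x_hat[n] = 0.9 * x[n-1]:
a, err = levinson_durbin(np.array([1.0, 0.9, 0.81]), 2)
# a is approximately [1, -0.9, 0], err is approximately 0.19
```

The coefficients `a[1:]` define the linear predictor; only these (plus the small prediction residual) need to be transmitted, which is where the bandwidth saving comes from.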
jeejou
Hello everyone,

Question: One version of Linear Predictive Coding (LPC) has been adopted as a standard for speech compression in mobile communications systems. How can Linear Predictive Coding (LPC) reduce the bandwidth in this speech coding?

Can someone please help me with this question? All I can find on the net is "LPC is the process used for reducing the bandwidth in speech coding". I would like to know how the process is actually undertaken. Can someone give me pointers/clues/lectures?

Thank you in advance.
 
Wikipedia is (usually) your friend http://www.engineer.tamuk.edu/SPark/chap7.pdf
 
Hi there mgb_phys,

I went through the webpage earlier. Unfortunately it does not say exactly how LPC reduces the bandwidth. I mean, the version of LPC is VSELP, right? But what are the processes involved? Can the Levinson-Durbin algorithm be used to answer the question, or is there another technique? I AM CONFUSED! Anyway, THANKS A LOT FOR THE HELP.
 
I'm not an expert, but generally for these sorts of things:

Suppose you have a signal that is, say, 16 bits, but it isn't random - your voice can't suddenly change by 86 dB in 1/20,000 of a second. You can fit a function to what the sound is going to do in the next small time interval and then store, in say 8 bits, the difference between the measured sound and the function. To play it back, just add the 8-bit offset to the function. You need a couple of other codes to say when the function has changed.
Since you only need 8 bits instead of 16, you have halved the bandwidth.
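The predict-and-store-the-difference idea above can be sketched in Python. This is a toy first-order predictor with 8-bit residuals, purely illustrative - the real VSELP codec is far more elaborate:

```python
import numpy as np

def encode(x):
    # Predict each 16-bit sample as the previous sample and keep only
    # the difference, stored in 8 bits. This assumes the signal varies
    # slowly enough that every difference fits in [-128, 127].
    pred = np.concatenate(([0], x[:-1]))
    return (x - pred).astype(np.int8)

def decode(residual):
    # Reconstruct by adding each 8-bit difference back onto the
    # previous reconstructed sample.
    return np.cumsum(residual)  # NumPy promotes to a wider integer type

# A slowly varying 16-bit "voice" signal round-trips exactly,
# yet is transmitted with half the bits per sample.
x = (100 * np.sin(np.arange(50) / 10)).astype(np.int16)
recovered = decode(encode(x))
```

Here the bandwidth halving comes purely from shrinking the residual to 8 bits; a real LPC codec also transmits the predictor coefficients themselves for each frame, plus the extra codes mentioned above for when the model changes.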

The details of the coding scheme I don't know.
 