What percentage of the bits on a CD is dedicated to error-correction?

1. Oct 9, 2014

hitemup

The problem statement, all variables and given/known data

On an audio compact disc, digital bits of information are encoded sequentially along a spiral path. Each bit occupies about 0.28 micrometers. A CD player's readout laser scans along the spiral's sequence of bits at a constant speed of about 1.2 m/s as the CD spins.
a) Determine the number N of digital bits that a CD player reads every second.
b) The audio information is sent to each of the two loudspeakers 44,100 times per second. Each of these samplings requires 16 bits, so one would think the required bit rate for a CD player is
N0 = 1.4 * 10^6 bits/second. The excess number of bits (N - N0) is needed for encoding and error-correction. What percentage of the bits on a CD is dedicated to encoding and error-correction?

The attempt at a solution

1.2 / (0.28 * 10^-6) = 4.3 * 10^6 bits per second.
((4.3 - 1.4) / 4.3) * 100 ≈ 67% for encoding and error correction.
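The arithmetic above can be double-checked with a short script (a sketch using only the numbers given in the problem: 0.28 µm per bit, 1.2 m/s scan speed, 44,100 samples/s, 16 bits per sample, 2 channels):

```python
# Check of the CD bit-rate arithmetic from the problem statement.
bit_length = 0.28e-6   # meters occupied by one bit
scan_speed = 1.2       # meters per second along the spiral

N = scan_speed / bit_length      # total bits read per second
N0 = 44_100 * 16 * 2             # audio bits actually needed per second

overhead = (N - N0) / N * 100    # percent used for encoding/error-correction

print(f"N  = {N:.3g} bits/s")          # ~4.29e6
print(f"N0 = {N0:.3g} bits/s")         # ~1.41e6
print(f"overhead = {overhead:.1f}%")   # ~67.1%
```

Carrying the unrounded values through gives about 67.1%, so the 67% from the rounded figures is consistent.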

Having seen the high percentage, I thought something might have gone wrong, so I'm asking for your help to check whether I solved this correctly.

2. Oct 9, 2014

RUber

The logic looks right to me.

3. Oct 9, 2014

hitemup

Last edited: Oct 9, 2014
4. Oct 9, 2014

RUber

In your linked thread, it looks like they added a zero somewhere: 1.2/0.28 = 4.286, and m/$\mu$m $= 10^6$. I agree that 67% seems high for error correction, but from the information you have posted, it could not be anything else.
$N = \frac{1.2\,\text{m/s}}{0.28 \times 10^{-6}\,\text{m/bit}}$
$\frac{N - N_0}{N}$ is the proper proportion.
If your initial numbers are correct, you should be confident in your method.

5. Oct 9, 2014

rcgldr

From the wiki article: a frame ends up containing 588 bits of "channel data" (which decode to only 192 bits of music), so 396 bits of overhead for encoding, error correction, etc., which corresponds to ~67.347% overhead.
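The frame-level figure can be verified directly from those two numbers (588 channel bits per frame, 192 of them audio, as cited above):

```python
# Overhead fraction from the CD frame structure cited from Wikipedia.
channel_bits = 588                 # total bits in one CD frame
music_bits = 192                   # bits of actual audio in that frame

overhead_bits = channel_bits - music_bits        # 396 bits of overhead
overhead_pct = overhead_bits / channel_bits * 100

print(f"{overhead_bits} bits, {overhead_pct:.3f}%")  # 396 bits, 67.347%
```

This matches the ~67% obtained independently from the scan speed and bit length, which is good evidence the original answer is correct.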
