
Digital vs Analog - Noise Distortion

  1. Sep 5, 2011 #1
    In my textbook it says this:

    "Digital communication, which can withstand channel noise and distortion much better than analog as long as the noise and the distortion are within limits, is more rugged than analog communication. With analog messages, on the other hand, any distortion or noise, no matter how small, will distort the received signal."

    Since a digital signal already has quantization error, wouldn't the distortion caused by noise in an analog signal be no worse than the quantization error in a digital signal which can withstand the same noise?
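    The quantization error the question mentions can be put in numbers. A minimal sketch (my own illustration, not from the textbook): a uniform n-bit quantizer on a full-scale sine gives a signal-to-quantization-noise ratio of about 6.02n + 1.76 dB, which we can verify by measuring the error directly.

```python
import math

def quantize(x, bits, full_scale=1.0):
    """Uniform mid-rise quantizer over [-full_scale, +full_scale]."""
    levels = 2 ** bits
    step = 2 * full_scale / levels
    # clamp into range, then snap to the nearest reconstruction level
    x = max(-full_scale, min(full_scale - step, x))
    return (math.floor(x / step) + 0.5) * step

def sqnr_db(bits, samples=10000):
    """Measured signal-to-quantization-noise ratio for a full-scale sine."""
    sig_pow = err_pow = 0.0
    for k in range(samples):
        x = math.sin(2 * math.pi * k / samples)
        e = x - quantize(x, bits)
        sig_pow += x * x
        err_pow += e * e
    return 10 * math.log10(sig_pow / err_pow)

# Rule of thumb: SQNR is roughly 6.02*bits + 1.76 dB for a full-scale sine.
for b in (8, 16):
    print(b, round(sqnr_db(b), 1))
```

    So the quantization "noise floor" is fixed by the bit depth, while channel noise on an analog link adds on top of whatever the medium does.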
     
    Last edited: Sep 5, 2011
  3. Sep 6, 2011 #2
    Depends on the level of noise and the number of quantization levels you are using.

    As a very rough example: suppose you are amplitude modulating. The digital case uses some scheme where bands of amplitude represent bits, such that every 200 mV is a new bit. Now, say your quantization step was 0.1 µV and you had noise on the channel of 1 mV.

    The digital system could transmit with a worst-case error of 0.1 µV (the 1 mV of noise never pushes the signal out of its 200 mV band), but an analog system would pass the full 1 mV of noise straight through, i.e. 10,000 times that error.
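    A quick simulation of that example (a sketch using the post's numbers: ±100 mV symbol centers for the 200 mV bands, 1 mV peak noise, 0.1 µV quantization step) shows the asymmetry, since the digital receiver's threshold decision wipes out any noise smaller than the band margin:

```python
import random

NOISE_MV = 1.0   # peak channel noise from the example
Q_UV = 0.1       # quantization step, in microvolts

random.seed(0)
digital_err_uv = []
analog_err_uv = []
for _ in range(1000):
    sent = random.choice([-100.0, 100.0])   # symbol centers of the 200 mV bands
    rx = sent + random.uniform(-NOISE_MV, NOISE_MV)
    decided = 100.0 if rx > 0 else -100.0   # threshold decision
    # noise never crosses the band boundary, so only quantization error remains
    digital_err_uv.append(abs(decided - sent) * 1000 + Q_UV)
    # the analog receiver has no decision step; noise passes straight through
    analog_err_uv.append(abs(rx - sent) * 1000)

print(max(digital_err_uv))   # stuck at the 0.1 µV quantization step
print(max(analog_err_uv))    # approaches 1000 µV, the full channel noise
```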
     
  4. Sep 6, 2011 #3
    I once heard an audio comparison between an FM audio channel and a digital channel. The digital channel was noise free down to about -105 dBm, where it started getting choppy and finally dropped out altogether. The analog channel had noise but was intelligible down to -118 dBm. Which would you say was better?
     
  5. Sep 6, 2011 #4

    rbj

    User Avatar

    i dunno if i would agree with the textbook.

    it is well known in the audio world that digital does not degrade gracefully. anyone with a real digital TV (not cable) can see that. what would be snow on an old analog TV becomes this terrible pixelization and eventual dropout on the new digital. but the digital TV looks great until the added noise is large enough that it starts to degrade.

    a little bit of noise hurts nothing in a digital signal. the 1s remain 1 and the 0s remain 0. but a little bit of noise hurts an analog signal a little bit.
     
  6. Sep 6, 2011 #5
    For practical reasons we have developed forward error correction (FEC) algorithms designed to correct all errors as long as the channel delivers better than about a 1/50 BER. This is by design, since that approach has the most general use. But the statement seems to imply that this behavior is some fundamental law. Ideally, an FEC should be tweakable to fail optimally for the given use of the data as the SNR degrades.
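    That "works perfectly, then falls off a cliff" behavior is easy to demonstrate with even the crudest FEC. A minimal sketch (not any production code, just a 3x repetition code with majority-vote decoding over a binary symmetric channel): below the code's strength the residual errors nearly vanish, and as the raw BER climbs the coding gain evaporates.

```python
import random

def rep3_residual_ber(p, bits=20000, seed=1):
    """Send each bit three times over a channel that flips bits with
    probability p; decode by majority vote; return residual error rate."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(bits):
        flips = sum(rng.random() < p for _ in range(3))
        if flips >= 2:      # majority of the copies corrupted -> decoded wrong
            errors += 1
    return errors / bits

# Residual rate is about 3p^2 - 2p^3: tiny at low p, useless as p grows.
for p in (0.001, 0.02, 0.2, 0.5):
    print(p, rep3_residual_ber(p))
```

    Real codes (convolutional, LDPC, turbo) have the same shape, just with a much sharper cliff much closer to the channel capacity.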

    The statement is too sweeping and neglects quite a few considerations, like:

    1. If the source data is sound or video and will be interpreted by a human, then (assuming source and channel coding being equal) analog is better because humans are better at filtering analog noise in audio or video than anything else out there.

    2. We currently can do digital error correction much better than analog error correction simply because we've figured it out better (in fact, analog REC and FEC are still laboratory novelties, as far as I know). So here digital is better for a purely practical reason. The reason we've pursued digital error correction so much more than analog error correction is that analog error correction must be designed per channel and type of data. Digital error correction works with any digital data (although in some cases it is somewhat tweaked for the type of data and typical channel behavior).

    3. For purely practical reasons, digital is convenient for channels with very low SNR, like hard disks and computer data buses.

    If you want to wax theoretical (i.e. what would a super advanced civilization use) then it would be filling the channel to the brim with good FEC or REC (if possible) so Shannon-Hartley was optimized, but with degradation with reduction in SNR in a way that depended on the use of the data. For example, audio and video would degrade such that the brain receiving it could always get the most info. My guess is that some amount of analog channel coding is necessary to realize perfect (Shannon-Hartley) data rate.

    Currently we must use analog integration to get the BER better than about 1/50 so that our current FEC algorithms can still work. (Anything worse than a 1/50 BER blows our best FEC out of the water.) And this is why typical digital communication works wonderfully or not at all. We don't degrade very gracefully because we can get the most bits across on average by bringing the BER down to 1/50 through simple (and inefficient) analog redundancy, then digital FEC (plus maybe some REC) does the rest.
     
    Last edited: Sep 6, 2011
  7. Nov 5, 2011 #6
    In the previous post, where I said "very low SNR" I meant to say "very high SNR".
     
  8. Nov 5, 2011 #7

    D H

    User Avatar
    Staff Emeritus
    Science Advisor

    Funny, but that is exactly what my cable does. My provider provides digital content, and when the signal goes bad it goes bad in a very big way. But when the signal is good (and not bandwidth limited), it is clean. Well, except maybe for some Gibbs phenomena, which you can see/hear even when the signal is perfect. Solution: crank up the bass. Then no one will know that the high frequencies suffer.

    I think this is what the textbook was after. The text did qualify its statement with "as long as the noise and the distortion are within limits." A little bit of noise in an analog signal is going to come across as a little bit of white noise in the picture / sound. A little bit of noise in a digital signal can be completely removed thanks to redundancies (error correcting codes) embedded in the signal.
     
  9. Nov 5, 2011 #8
    I am no expert in communication. I have read some books about it. My thinking is that if the digital signal is truly "1" and "0", be it using edges or levels, it should be more reliable, as it can take a lot of distortion before you lose it. An analog signal changes as soon as you start adding distortion.

    BUT, nowadays digital communication formats are not just 1 and 0.....far from it. Take QAM: with 256-QAM you pack 8 bits of information into one symbol. It is a quadrature signal of two components, and depending on the "analog levels" you create a 2D map (the constellation). You get right back to the analog distortion problem again.
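    To make the QAM point concrete, here is a sketch of a small 16-QAM constellation (4 bits per symbol on a 4x4 grid of I/Q levels; the mapping is my own toy example, not any standard's). The receiver picks the nearest constellation point, so errors only happen when noise pushes a point across a decision boundary, which is exactly where the analog nature sneaks back in:

```python
import random

LEVELS = [-3, -1, 1, 3]
# symbol value -> (I, Q) point: high 2 bits pick I, low 2 bits pick Q
CONSTELLATION = {(i << 2) | q: (LEVELS[i], LEVELS[q])
                 for i in range(4) for q in range(4)}

def decide(i_rx, q_rx):
    """Nearest-neighbor decision on the received (I, Q) sample."""
    return min(CONSTELLATION,
               key=lambda s: (CONSTELLATION[s][0] - i_rx) ** 2 +
                             (CONSTELLATION[s][1] - q_rx) ** 2)

random.seed(2)
errors = 0
trials = 5000
for _ in range(trials):
    sym = random.randrange(16)
    i0, q0 = CONSTELLATION[sym]
    # mild Gaussian noise; decision boundaries sit 1 unit from each point
    rx = (i0 + random.gauss(0, 0.3), q0 + random.gauss(0, 0.3))
    if decide(*rx) != sym:
        errors += 1
print(errors / trials)
```

    The denser the grid (256-QAM packs the points four times closer per axis than 16-QAM for the same power), the less noise it takes to cross a boundary.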

    The good thing about digital communications is that they always have checks and feedback. For example, they have a checksum for each frame; if the checksum fails, the receiver can tell the transmitter to re-transmit the info. That is the reason a bad connection slows the communication down rather than failing, up to a certain point where it dies!!! You cannot do the check, verify, and request-to-resend in analog communication; you lose it, it's lost. That is also part of the reason for the delay when you see news where the anchor interviews a person in a remote location. Notice I highlight "part", because there are other reasons, like pipelining different channels, bundling, and all sorts of processing during transmission. But a bad link will cause seconds of delay due to the re-transmissions. 10 years ago when I was designing hardware for SONET OC48 and OC192, I read into SONET and ATM. I believe they both do that. But as usual most of the stuff I read has leaked out of the back of my head already!!!!
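    That check-and-retransmit loop is easy to sketch. Below is a toy stop-and-wait ARQ with a deliberately simple additive checksum (real links use CRCs, and the function names here are my own, not from any standard): a clean link delivers on the first try, while a noisy one burns extra transmissions, which is exactly the "bad link slows down before it dies" behavior.

```python
import random

def checksum(payload: bytes) -> int:
    """Toy 8-bit additive checksum (stand-in for a real CRC)."""
    return sum(payload) % 256

def noisy_channel(frame: bytes, flip_prob: float, rng) -> bytes:
    """Flip each bit of the frame independently with probability flip_prob."""
    out = bytearray(frame)
    for i in range(len(out)):
        for b in range(8):
            if rng.random() < flip_prob:
                out[i] ^= 1 << b
    return bytes(out)

def send_with_arq(payload: bytes, flip_prob: float, seed=3, max_tries=100):
    """Stop-and-wait ARQ: retransmit until the checksum verifies.
    Returns (delivered payload, number of transmissions used)."""
    rng = random.Random(seed)
    frame = payload + bytes([checksum(payload)])
    for attempt in range(1, max_tries + 1):
        rx = noisy_channel(frame, flip_prob, rng)
        body, ck = rx[:-1], rx[-1]
        if checksum(body) == ck:    # receiver would ACK here; we are done
            return body, attempt
    raise RuntimeError("link too bad: gave up")

msg = b"hello"
good, tries_clean = send_with_arq(msg, flip_prob=0.0)
good2, tries_noisy = send_with_arq(msg, flip_prob=0.02)
print(tries_clean, tries_noisy)  # the noisier link tends to need more tries
```

    Note the weakness the toy shares with real systems: a checksum can collide, so a corrupted frame occasionally slips through; stronger CRCs just make that astronomically unlikely.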

    As I said, I am not an expert on this, this is just my understanding from the limited knowledge.
     
    Last edited: Nov 5, 2011



