
Can Someone Explain Jitter Commonsensically?

  1. Jun 10, 2006 #1

    Les Sleeth

    User Avatar
    Gold Member

    Can Someone Explain "Jitter" Commonsensically?

    What mystifies me is why jitter correction should make such a difference to the sound quality of music.

    Based on reviews of this product . . . http://www.monarchyaudio.com/DIP.htm . . . I bought one used for a mere $150 to see if it would help. Not only did it help, it was BY FAR the most dramatic improvement in the sound quality of my music system. Every single CD I own jumped leaps and bounds in quality.

    If you research jitter, you get quick explanations about how pulses are messed up, spread out over time, etc., but I still can't explain it properly to my friends. Can anybody explain in lay terms why jitter correction would have such an improved effect on music?
  3. Jun 12, 2006 #2


    Evo

    User Avatar
    Staff: Mentor

    I only get involved with jitter when doing VoIP, but this may be an easy explanation:

    "Although a Compact Disc or digital audio tape doesn't have jitter per se, the recorded digital data carries the effects of jitter produced by the ADC. The samples were taken at nonuniform time increments; when those samples are fed to a DAC with a uniform (jitter-free) clock, the sampled original analog signal is not accurately reconstructed. In a DAC fed samples taken at uniform time increments, jitter produces a similar skewing of the samples in time. Again, the reconstructed analog signal doesn't accurately represent the original audio signal."
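The effect described in that quote can be sketched numerically. Below is a minimal simulation (all figures are illustrative, not taken from any real converter): a sine wave is sampled at slightly jittered instants, the data are then treated as if they had been taken on a uniform grid, and the timing error shows up as noise that caps the achievable SNR.

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 44_100.0        # nominal sample rate (Hz)
f0 = 10_000.0        # test-tone frequency (Hz)
n = np.arange(4096)

t_ideal = n / fs                     # where the samples *should* be taken
jitter_rms = 2e-9                    # 2 ns RMS clock jitter (illustrative)
t_actual = t_ideal + rng.normal(0.0, jitter_rms, n.shape)

# The ADC samples the analog sine at the jittered instants, but the data
# are later played back as if they had been taken on the uniform grid.
intended = np.sin(2 * np.pi * f0 * t_ideal)
captured = np.sin(2 * np.pi * f0 * t_actual)

# The timing error behaves like additive noise on the signal.
error = captured - intended
snr_db = 10 * np.log10(np.mean(intended**2) / np.mean(error**2))
print(f"SNR limit imposed by clock jitter alone: {snr_db:.1f} dB")
```

Doubling either the tone frequency or the jitter costs roughly 6 dB, which is why jitter matters most for high-frequency content.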

  4. Jun 13, 2006 #3


    User Avatar
    Staff Emeritus
    Science Advisor
    Gold Member

    As Evo said, jitter is inconsistency in the time between each sample of a digital signal. If you look at a signal on an FFT, the effect of jitter is to smear out peaks in the spectrum, reducing their peak power and spreading it over a range of nearby frequencies. This causes frequencies that should be distinct (say, the overtone series for a number of different instruments in a symphony) to overlap, ruining the more subtle features of the instruments' timbre.
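That spectral smearing is easy to demonstrate. This sketch (with the jitter deliberately exaggerated to 100 ns RMS so the effect is obvious) compares the spectrum of a cleanly sampled tone with a jittered one; the jitter raises a noise "skirt" around the peak, spreading power into nearby frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 48_000.0
f0 = 3_000.0
N = 1 << 14
t = np.arange(N) / fs
sigma = 100e-9          # 100 ns RMS jitter, exaggerated for visibility

x_clean = np.sin(2 * np.pi * f0 * t)
x_jit = np.sin(2 * np.pi * f0 * (t + rng.normal(0.0, sigma, N)))

win = np.hanning(N)
spec_clean = np.abs(np.fft.rfft(x_clean * win))
spec_jit = np.abs(np.fft.rfft(x_jit * win))
freqs = np.fft.rfftfreq(N, 1 / fs)

# Energy in the "skirt" near (but not at) the tone: jitter spreads power
# from the peak into these neighbouring frequencies.
skirt = (np.abs(freqs - f0) > 200) & (np.abs(freqs - f0) < 2000)
print("skirt energy, clean   :", np.sum(spec_clean[skirt] ** 2))
print("skirt energy, jittered:", np.sum(spec_jit[skirt] ** 2))
```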

    (I'll also note that the entire concept of a "jitter reducer" is hogwash. Once an analog signal has been digitized with a jittery clock, information has been lost. You cannot reconstruct a "jitter-free" digital signal from one which has been poorly digitized. It doesn't really matter how many stars the product got from obscure audiophile magazines; it's not physically possible.)

    - Warren
  5. Jun 13, 2006 #4

    Les Sleeth

    User Avatar
    Gold Member

    Hmmmmm, exactly the improvement I observe. Every instrument is made more distinct, every note more clear. Most of the recordings I am comparing I've listened to for years, so I can easily tell the difference. Plus, I've been upgrading all aspects of my system one component at a time, with the Monarchy DIP added last, after everything from new cables and a DAC to power amps and speakers.

    Of course. I shouldn't have said jitter "correction," which Monarchy doesn't claim; rather, they claim jitter "suppression." However, Stereophile is hardly an obscure audio magazine! It is the recognized standard for audiophiles, and Stereophile is not the only magazine that praised it. In fact, I have yet to find a single critique by an audiophile (and there are dozens to be found if you Google) that wasn't positive overall, both in listening experience and in measurements (when taken).

    Perhaps I should have asked why the Monarchy DIP works when so many attempts to improve the jitter situation have failed. One case in point is the high-end audio company Theta (whose DAC I use), which produced a so-called jitter reduction unit that tests showed actually increased jitter.

    In the excellent article Evo referenced (how did I overlook that one?), the author (Dr. Rémy Fourré) suggests how reclocking devices like the Monarchy DIP might help:

    "A more significant appraisal of input-receiver performance, however, is how well it rejects jitter in the incoming signal. Does the input receiver pass jitter from the transport to the recovered clock, or does it attenuate it? With well-designed transports and impedance-matched transmission lines, the intrinsic jitter dominates. But with poorly designed transports and impedance-mismatched transmission lines, the incoming jitter not rejected by the PLL becomes the dominant factor. The jitter in the recovered clock (what we're really concerned about) is thus a function of the transport's jitter, the interface-induced jitter, the input receiver's ability to reject incoming jitter, and the input receiver's intrinsic jitter.

    By their nature, PLLs reject incoming jitter only above a certain frequency, called the jitter attenuation cutoff frequency. Consequently, we must consider both the receiver's intrinsic jitter and its jitter attenuation cutoff frequency when specifying an input receiver's performance. The single intrinsic jitter specification doesn't tell the whole story.

    Barring design and layout errors, the jitter performance of a digital processor is primarily dependent on the input receiver stage. The best currently available monolithic (chip) input receiver is the Crystal CS8412. This has an intrinsic jitter of 200ps and a jitter attenuation cutoff frequency of 25kHz. With no input signal, the CS8412 will introduce 200ps of clock jitter. Jitter in the incoming data stream with a frequency below 25kHz will be passed to the recovered clock. The performance achieved by the best currently available hybrid digital audio receiver (UltraAnalog AES 20) is typically 40ps for intrinsic jitter and 1kHz for jitter attenuation cutoff frequency.

    These criteria for specifying input receiver performance also apply to "reclocking" devices that claim to reduce jitter in the data signal. These devices receive an AES/EBU or S/PDIF input signal, recover the clock from this signal, and reclock the output signal with this improved clock. Just as with an input receiver, we want to know the device's intrinsic jitter and its jitter attenuation cutoff frequency. A reclocker can only be as good as the input receiver it uses.

    Reclockers have two major limitations. First, if the CD transport's intrinsic jitter is less than the reclocker's intrinsic jitter (in the DC-40kHz band), the reclocker can only add jitter. A reclocker can improve high-jitter transports, but it degrades low-jitter transports (footnote 8). Although the output clock may appear much cleaner on an oscilloscope when high-frequency jitter is attenuated, it doesn't mean there is less jitter in the DC-40kHz band. Remember, only jitter in this band can affect a multi-bit converter's sonic performance. Only a spectral analysis of the jitter can reveal whether these devices improve or degrade the signal.

    The second limitation of reclocking devices is the reclocker's jitter attenuation cutoff frequency. If it isn't lower than that of the digital processor's input receiver, the reclocker will simply pass incoming jitter at frequencies which will be rejected anyway by the digital processor's input receiver."
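The PLL behaviour Fourré describes can be mocked up by treating the jitter itself as a signal and the PLL as a first-order low-pass on it: jitter below the attenuation cutoff passes to the recovered clock, jitter above it is rejected. The cutoff figures below are the ones quoted in the article; everything else (observation rate, jitter statistics) is illustrative.

```python
import numpy as np

def jitter_through_pll(jitter, fs, cutoff_hz):
    """Crude model of a PLL's jitter transfer: a one-pole low-pass,
    so jitter below cutoff_hz is passed and jitter above it rejected."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)
    out = np.empty_like(jitter)
    acc = 0.0
    for i, j in enumerate(jitter):
        acc = a * acc + (1.0 - a) * j
        out[i] = acc
    return out

rng = np.random.default_rng(2)
fs = 1_000_000.0                          # rate at which jitter is observed
incoming = rng.normal(0.0, 1.0, 50_000)   # broadband incoming jitter (arb. units)

receiver_only = jitter_through_pll(incoming, fs, 25_000)   # CS8412-like cutoff
reclocked = jitter_through_pll(incoming, fs, 1_000)        # good reclocker
then_receiver = jitter_through_pll(reclocked, fs, 25_000)

print("RMS jitter, receiver alone      :", np.std(receiver_only))
print("RMS jitter, reclocker + receiver:", np.std(then_receiver))
```

Because the reclocker's 1 kHz cutoff is far below the receiver's 25 kHz cutoff, cascading it in front removes jitter the receiver alone would have passed; a reclocker whose cutoff were *above* 25 kHz would, as the article notes, accomplish nothing.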

    In any case, thanks to you and Evo for clarifying the issue.
  6. Jun 14, 2006 #5
    In networking, jitter is the change in latency over time, i.e., with VoIP, if packets/frames are not received at the listening end in the correct order and with consistent latency, in a worst-case scenario you won't be able to make out what the person talking was saying...

    Not within networking it isn't. You can reduce jitter by deploying a means of traffic prioritisation on the network talk-path between the transmitter and receiver. You can also reduce jitter by deploying better queuing systems on your Layer 3 devices (routers, L3 switches), for example Class-Based Weighted Fair Queuing...
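For readers unfamiliar with those queuing schemes, here is a toy sketch of the weighted-queuing idea (class names and weights are made up, and real CBWFQ on a router is bandwidth- and byte-aware, not packet-count-based): latency-sensitive traffic gets a larger share of transmit slots, so its packets spend less variable time waiting, which is what reduces jitter.

```python
from collections import deque

def weighted_round_robin(queues, weights, budget):
    """Serve each traffic class in proportion to its weight.
    Toy packet-count version of class-based weighted queuing."""
    order = []
    while any(queues.values()) and len(order) < budget:
        for cls, w in weights.items():
            for _ in range(w):
                if queues[cls] and len(order) < budget:
                    order.append(queues[cls].popleft())
    return order

queues = {
    "voice": deque(f"v{i}" for i in range(6)),  # latency-sensitive class
    "bulk":  deque(f"b{i}" for i in range(6)),  # best-effort class
}
weights = {"voice": 3, "bulk": 1}               # voice gets 3x the service

schedule = weighted_round_robin(queues, weights, budget=8)
print(schedule)   # voice packets dominate the schedule
```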

    Anyway, this is how we define jitter in the wonderful world of networking.
  7. Nov 27, 2007 #6
    I'd like to hear some more about how this is done. Do you have good references?
    I'm currently working on a kind of recurrent neural network which would be able to predict the time delay between pulses, and hopefully also time jitter. I know nothing about jitter, but my idea is: if I can tell you beforehand at what time the peak of a pulse will arrive, then I can correct your jitter, right?
    How is jitter correction done in common practice?
  8. Dec 2, 2007 #7
    This would be true if the jitter reducer were claiming to suppress jitter associated with the original ADC. As you say, that's physically impossible (unless, of course, you also had access to the clock signal used in the ADC in the first place). But that's not what they're doing: there are other instances of jitter in a digital audio chain, and that's what they're after. Specifically, these products try to reduce jitter in the DAC. The situation they have in mind is that you have some digital audio system (workstation, CD player, whatever) that spits out a digital audio signal with an associated clock signal, which is then fed into the DAC. The issue is how the DAC deals with the incoming clock signal: in the simplest case, it uses the incoming clock directly in the conversion process, with the result that any jitter in the incoming clock degrades the output quality. These jitter-reducer products are essentially high-quality clock resynthesizers using phase-locked loops. So the idea is that if you have a crummy clock at the digital output of your digital audio system, the jitter reducer can clean up the clock signal before it hits the DAC.

    However, there are much better ways of getting the same or better results. One obvious way would be to use a digital audio system that doesn't have a jittery output clock in the first place. Another (highly recommended) approach is to use a DAC that employs its own low-jitter internal clock, which avoids any considerations about jitter at the input. Such a DAC uses the input clock to buffer and synchronize the data, but then uses a stable local clock to drive the actual digital-to-analog conversion. This comes at the cost of some extra delay, but is generally preferred, as it results in the best performance. That said, an external jitter reducer can be a cost-effective option, as upgrading one's digital audio system or DAC can be prohibitively expensive. But it has no place in professional or audiophile setups; it's a patch for limiting the damage from poor or compromised system components/design. Jitter reducers cannot help, and may well hurt, the performance of systems and components that were designed for high quality at the outset.
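The "buffer plus local clock" DAC design described above can be sketched in a few lines (all numbers illustrative): samples arrive at jittery instants, but as long as the jitter is bounded, a small FIFO primed with a couple of samples of latency always has data ready when the stable local clock asks for it.

```python
import random

random.seed(0)

n = 10_000
latency = 2    # priming delay of the local clock, in sample periods

# Sample i arrives at a jittery time near i (bounded phase jitter, no drift);
# the stable local clock consumes sample i at exactly time i + latency.
arrivals = [i + random.uniform(-0.4, 0.4) for i in range(n)]
underruns = sum(1 for i in range(n) if arrivals[i] > i + latency)
print("samples not ready when the local clock needed them:", underruns)
```

Because the jitter here is bounded (within ±0.4 sample periods), the underrun count is zero; the price is the fixed two-sample delay, which is the "extra delay" trade-off mentioned above.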
  9. Dec 4, 2007 #8
    Tsunami, it's not corrected in IP networks, it's prevented, by allocating resources before transmission. In telephony they use the IntServ QoS model and protocols to create an end-to-end circuit that reserves resources for the telephone call before the call is allowed to be set up. The same *can* be done on IP networks (for example MPLS RSVP-TE), but it isn't, due to scalability issues. So instead the DiffServ QoS model, a per-hop model, is used. This still allows a reservation of resources to be allocated on each network device, but each device is independent.

    I think you are talking about correcting errors in modulation rather than errors due to "network" behavior?

    Anyway, google IntServ or DiffServ QoS, and if you really are interested I have 1000-page PDFs on how QoS works and how to implement it on networks.

    PS hello PF been awhile ;)