I read this under "Nyquist Interval" in the opening chapter, "Historical Background":

===== "If the essential frequency range is limited to B cycles per second, 2B was given by Nyquist as the maximum number of code elements per second that could be unambiguously resolved, assuming the peak interference is less than half a quantum step. This rate is generally referred to as signaling at the Nyquist rate and 1/(2B) has been termed a Nyquist interval." =====

I understand how the Nyquist theorem applies to signals that are sampled, and I understand aliasing to a certain degree. What I do not understand is how it matters to signals like telegraph, where there is no sampling going on.

- Why does it matter, for a channel carrying something like telegraph, what the bandwidth is? 1 Hz is enough to represent a pulse.
- Why do we worry about the bandwidth of a telegraph channel with respect to the number of pulses per time period? These aren't sampled, so how can aliasing occur? Why can't we pulse a million of them a second over 1 Hz?
- Related: why does FSK/PSK have an advantage over ASK/OOK? Why not just take 1 Hz of bandwidth and put a carrier up and take it down a million times a second?

Please answer so that I can understand intuitively. I've seen the formulas, but nobody has been able to explain to me why it is that way. Thank you
Most folk associate Nyquist with data sampling, which is unfortunate. His actual work was in the field of communications: "How much data can I pass through a bandlimited channel?" His answer, as you indicated, is that the bits/sec is twice the bandwidth. I will give an intuitive, totally informal, argument that hopefully you can appreciate.

Given binary data, which pattern represents the highest frequency? The 101010... pattern is a squarewave whose fundamental frequency is half of the bit rate. If this is low-pass filtered using an ideal brick-wall filter we get a sine wave, and the 1s and 0s are still recoverable since we are only "looking" at the signal at its peak and trough (center of the eye in datacom parlance). Any other pattern involves lower frequencies. For example, the 11001100... pattern is a squarewave at 1/4 the bit rate. Thus the 1010... pattern's fundamental frequency is as high as we need to go.

All channels have some bandwidth; it is an indication of how rapidly signals can change and still make it through the channel. If you send a rectangular pulse through a 1Hz channel (or a 1Hz low-pass filter), the output will not be a rectangular pulse but a gradual bump with a long, ringing tail. If you send a rapid sequence of rectangular pulses through the 1Hz channel, the "tails" of the earlier pulses will overlap the bumps of the later pulses. The pulses kind of get smeared together (inter-symbol interference).

You are right, there is no sampling; this has nothing to do with sampling. The bandwidth of the channel is an indication of how fast you can change the signal and still have the change make it through to the channel's output. Given a bandwidth of 1Hz, if you launch a signal that is wiggling up and down a million times per second, then the signal at the output of the channel will not be wiggling very much (lots of attenuation). Think about the diaphragm in a loudspeaker.
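To make the smearing concrete, here is a small sketch (Python with NumPy, purely illustrative; the sample rate and pulse width are my own arbitrary choices) that pushes a 10 ms rectangular pulse through an ideal 1Hz brick-wall filter and measures how wide it comes out:

```python
import numpy as np

def brickwall_lowpass(x, fs, cutoff_hz):
    """Zero out every frequency component above cutoff_hz (ideal filter)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(X, n=len(x))

fs = 1000.0                       # simulation sample rate, Hz
t = np.arange(0, 10, 1 / fs)      # 10 seconds of time
x = np.zeros_like(t)
x[(t >= 1.0) & (t < 1.01)] = 1.0  # a 10 ms rectangular pulse at t = 1 s

y = brickwall_lowpass(x, fs, 1.0)  # the 1Hz channel

# Measure where the output is still above 10% of its peak:
above = t[np.abs(y) > 0.1 * np.abs(y).max()]
width = above[-1] - above[0]
print("input width:  0.010 s")
print("output width: %.2f s" % width)
```

The 10 ms pulse comes out as a bump with ringing spread over a second or more, so pulses sent much faster than a couple per second would land on each other's tails.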
If the speaker has a bandwidth of 4kHz and you apply an electrical signal that varies at 1MHz, the diaphragm will barely move; its inertia will not allow it to keep up. Again, you cannot put the carrier up and down a million times per second in a 1Hz bandwidth; this bandlimiting only allows very slow changes to the signal over time.

That does not mean that you cannot send 1Mbps over 1Hz of bandwidth. You can (in principle, not in practice) if you have 2^(1,000,000) voltage thresholds. Your signal meanders in such a way that once per second it has arrived at the next threshold, which defines a million bits of information. You are not looking at the meanderings; you are just looking once per second, when the signal has arrived at its new threshold. Thus your movement is slow enough to be propagated through a 1Hz-bandwidth channel, but you would need an extraordinary signal-to-noise ratio to discriminate between that many thresholds. This concept goes beyond Nyquist (it is part of Shannon's work).
Hm.. thank you. I am still going over this again to try and understand. But I can see that this is helpful. I appreciate it :) Another thing -- bps=2B. But I thought that we actually can do better than this, but modulating Q and I, ie. QPSK, and other modulation techniques? Or is the ceiling 2B, and things like PSK are much more inefficient than 2B? Thanks again
The bandwidth of a channel limits the max frequency you can put down it. Let's say the channel of bandwidth B has a perfect cutoff (rectangular filter), that is, below frequency B the signal is perfect, and above frequency B no signal passes. If the bandwidth of the channel is 1Hz, you cannot put any signal through that is higher in frequency that 1Hz. The channel capacity is not 2B. 2B is the sample rate needed to unambiguously sample a signal with frequency B. If I modulate a carrier of freq B with signal A, I need to sample at 2B to recover both the carrier and the modulating signal. If I sample at B, I will alias the carrier to 0 and can recover signal A. Hope that helps some.
Harry Nyquist's bps=2*Bandwidth criterion dates back to 1928. Telegraph was the primary form of communication and Nyquist was essentially studying the problem of how rapidly dots and dashes could be sent over band limited channels (working for AT&T Bell Labs). He derived a number of "Specific Criterion of Distortionless Transmission" involving "DC Waves" (telegraph dots and dashes). In this case "distortionless" meant that there would be zero intersymbol interference. In addition, his channels were assumed to be linear and noise free. However, "Distortionless" is not our goal in data communications. The goal is data recovery with 100% fidelity. We can accept a certain amount of distortion and noise and recover data with 100% fidelity. This is what Shannon's work showed 20 years later. So, yes, we can do better than bps=2B, and signal to noise ratio plays a part. Back to Nyquist's result: How did he demonstrate zero intersymbol interference (zero distortion) with bit rate = 2*bandwidth? I mentioned earlier that pulses sent through bandwidth limited channel produce a ringing waveform. The frequency of the ringing is related to the filter bandwidth. Specifically, if you have a 1Hz brickwall filter (or many other kinds of filters), the zero crossings of the ringing occur every 1/2 second. If your data rate is 2bps then the center of each data bit can coincide with the zero crossings of the ringing produced by all other data bits. In other words each bit produces a ringing waveform which will not interfere with any later bit because it will have a zero crossing right in the middle of every subsequent bit interval. Of course, part of this criterion is that we only look at the bits in the center of the bit interval (center of the eye). 
This "distortionless" aspect of Nyquist's work, which makes it incomplete from a communication theory perspective, is what ties it mathematically to the more will known sampling theory (which came around later) whereby if we sample a signal above the "Nyquist rate" we can reconstruct a distortionless copy (in the absence of noise).
I wish I understood this better. I will keep reading these posts and hopefully some of it will become more clear. Thank you for helping :)
Sorry to hear that you are still struggling with this oneamp, One of the limitations of a forum like this is a lot of the communication winds up being text. If I were standing in front of a blackboard with you beside me I could probably do a better job conveying the idea. Anyway, to just summarize: the bps = 2*BW rule, if followed, allows us to transmit data without distortion (as defined by Nyquist). A better rule, which allows distortion and incorporates signal to noise ratio, was developed by Claude Shannon about 20 years later. However, Nyquist's analysis, even though originally conceived to provide distortion free communications, turned out to be applicable to sampling theory (and this is what most folk associate Nyquist with although he never actually worked on sampling theory). Maybe ask more questions? There are lots of folks here who can chime in and help. And don't fret, it is actually a very complex topic.
I think you guys have done a great job helping me. I just need to concentrate on it for awhile. I really do appreciate you taking the time to type up the information for me, and I will be studying it!
The rise time of a signal in a channel is proportional to the reciprocal of the channel bandwidth. That sets the maximum data change rate. For narrow channels at high frequencies, the number of cycles needed for the change is the reciprocal of the Q of the channel.
The amount of information that is carried in an analogue channel is potentially very high. The actual signal level at any time can be measured to many significant digits and each digit is, potentially, a piece of information. A binary signal just carries 1 bit of informations, per 'symbol' and is the simplest form of signalling. You can use multiple levels for digital signalling, which will increase the rate at which you can send information (at least in principle). There is a limit, though, which Shannon tell us is set by the level of the added noise. Lower channel noise means more levels can be used and more information can be sent in any given channel bandwidth. The system just needs to be much more complex and intelligent to achieve this. Most signalling systems operate well below the limit imposed by Shannon. PSK and multilevel ASK systems are used - and also, combinations of the two, to provide a 'constellation' of signalling states, which increases the capacity of a channel (at the expense of noise performance, of course).
I think the OP was hung up on the bps=2*bandwidth rule. He was reading historical information indicating that this was Nyquist's a rule for defining bps given bandwidth (which is was), and trying to reconcile this with its common application to sampling theory.