# Nyquist Frequency

Number2Pencil

## Homework Statement

There seem to be varying answers as to the definition of "Nyquist Frequency" among my professor, myself, and various internet sites.

One group says:

"The Nyquist Frequency is 1/2 the sampling rate"

and the other group says:

"The Nyquist Frequency is twice the signal's bandwidth"

Both groups agree that your sampling rate must be twice the signal's bandwidth to avoid aliasing, but when a problem simply says "the sampling rate is x, and the signal's bandwidth is y, what is the Nyquist Frequency?" I get different answers depending on which definition I use.

Which is the correct definition, and how can I argue this to my professor?

## The Attempt at a Solution

Sources, group 1:

http://en.wikipedia.org/wiki/Nyquist_frequency
http://www.encyclopedia.com/doc/1O13-Nyquistfrequency.html

Sources, group 2:

http://www2.egr.uh.edu/~glover/applets/Sampling/Sampling.html

Staff Emeritus
To me there is no right or wrong. Both are right. Words can have multiple meanings, even in a technical context. What's wrong is arguing that only one sense is right, the other wrong.

Picture yourself as a designer of some digital system. Here you are worried about what the sampling frequency has to be. You find the highest frequency of concern, double it to yield the minimum sampling frequency, and then set the sampling frequency even higher than that for safety. The Nyquist frequency is based on system design parameters, and it is this design-dictated Nyquist frequency that defines the sampling frequency.
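In code, with hypothetical numbers, the designer's bookkeeping might look like this (the 4 kHz band edge and the 25% safety margin are made-up values for illustration):

```python
f_max = 4000.0             # highest frequency of concern, Hz (hypothetical)
nyquist_rate = 2 * f_max   # minimum sampling frequency to avoid aliasing
fs = 1.25 * nyquist_rate   # add margin, e.g. for anti-alias filter roll-off
nyquist_freq = fs / 2      # Nyquist frequency of the deployed system

print(fs)            # 10000.0
print(nyquist_freq)  # 5000.0
```

Note how the two senses coexist: the design requirement (2 * f_max) fixes the sampling rate, and once the system is built, fs / 2 is the Nyquist frequency the operator lives with.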

Now picture yourself as an operator of this system. The system was built and deployed long ago, and it's getting on a bit in years; things are getting a bit rough around the edges. Can you see some noise signal in the telemetered data? If the frequency of that noise is more than half the sampling frequency, no you can't. The sampling frequency was set in the design. You can't change it, so now the sampling frequency defines the Nyquist frequency.

Homework Helper
Gold Member

Both groups agree that your sampling rate must be twice the signal's bandwidth to avoid aliasing, but when a problem simply says "the sampling rate is x, and the signal's bandwidth is y, what is the Nyquist Frequency?" I get different answers depending on which definition I use.

EDIT: there is confusion between Nyquist frequency and Nyquist rate.

The Nyquist frequency is by definition half the sampling rate, since all frequencies up to that frequency will not be aliased when sampled.
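As a sketch of what "half the sampling rate" means in practice, the snippet below (with hypothetical numbers: fs = 100 Hz) folds a real tone's frequency into the 0 to fs/2 range and checks that a 70 Hz tone sampled at 100 Hz is indistinguishable from a 30 Hz tone:

```python
import numpy as np

fs = 100.0            # sampling rate in Hz (hypothetical)
nyquist = fs / 2      # Nyquist frequency: half the sampling rate

def alias(f, fs):
    """Frequency that a real tone at f appears as after sampling at fs."""
    f = f % fs               # fold into [0, fs)
    return min(f, fs - f)    # reflect into [0, fs/2]

# A 70 Hz tone sampled at 100 Hz aliases down to 30 Hz,
# while a 30 Hz tone (below the 50 Hz Nyquist frequency) is unchanged.
print(alias(70.0, fs))   # 30.0
print(alias(30.0, fs))   # 30.0

# The sample sequences really are identical:
n = np.arange(8)
assert np.allclose(np.cos(2*np.pi*70.0*n/fs), np.cos(2*np.pi*30.0*n/fs))
```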

The Nyquist rate is the minimum sampling rate at which a given signal will not be aliased.

The Nyquist rate is not simply twice the maximum signal frequency. For example, if a signal of bandwidth W Hz lies between mW and (m+1)W Hz, with m an integer (i.e. a pass-band signal), the Nyquist rate is 2W, not 2(m+1)W.

I would advise not paying too much attention to the nomenclature and concentrate instead on the theory.

The sampling rate can be anything. In fact, deliberate undersampling is often done. The answer is that the Nyquist rate is twice the highest frequency in a baseband signal (extending from 0 to f).

But if the signal's spectrum is limited to the range f1 to f2, then the Nyquist rate (the minimum sampling frequency that avoids aliasing) is given by a more complex formula, and is almost always < 2f2.
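A minimal sketch of that "more complex formula", using the standard bandpass-sampling result fs_min = 2*f2 / floor(f2/(f2-f1)) (the band placements below are hypothetical numbers, not from the thread):

```python
import math

def bandpass_nyquist_rate(f1, f2):
    """Minimum uniform sampling rate that avoids aliasing for a real
    signal confined to the band [f1, f2] Hz (standard bandpass-sampling
    result; assumes f2 > f1 >= 0)."""
    bw = f2 - f1
    return 2.0 * f2 / math.floor(f2 / bw)

# Integer-positioned band from the post, with m = 3 and W = 1 kHz:
# the minimum rate is 2W = 2000 Hz, not 2(m+1)W = 8000 Hz.
print(bandpass_nyquist_rate(3000.0, 4000.0))    # 2000.0

# A band not at an integer multiple of its width, 10-14 kHz:
# the minimum rate is 28000/3, roughly 9333 Hz, well below 2*f2 = 28 kHz.
print(bandpass_nyquist_rate(10000.0, 14000.0))
```

When f2 is an exact integer multiple of the bandwidth, the formula collapses to 2W, matching the mW to (m+1)W example above; otherwise the minimum rate lands somewhere between 2W and 2f2.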

Last edited:
Homework Helper
Gold Member
. Can you see some noise signal in the telemetered data? If the frequency of that noise is more than half the sampling frequency, no you can't.

???
You might want to rewrite that.

Number2Pencil
I am more than happy to just accept that as the answer and move on, but how did you determine that your answer was the correct choice?

Homework Helper
Gold Member
I am more than happy to just accept that as the answer and move on, but how did you determine that your answer was the correct choice?