What method does a receiver or transmitter use to approx....

In summary: more samples are better, because more samples let us filter out noise more effectively. The FFT requires a fixed number of sample points. Say you have 1024 samples of a waveform: the FFT takes those 1024 samples and breaks them into 512 frequency components. Each component corresponds to a sine-and-cosine pair in the time domain, and in the frequency domain each is represented by an amplitude and a phase (in radians). The thread below discusses the use of the digital FFT in sending and receiving signals, and the methods a receiver uses to turn the instantaneous values of a signal into something it can transform.
  • #1
tim9000
Hi,
I'm just curious because I know wifi uses a digital FFT to send and receive signals. (I can't really remember why.)
But when I imagine a signal being sent it's like a squiggly wave, so what method does the receiver use to approximate the instantaneous values of the signal into a mathematical formula, or doesn't it bother?

What is an example of a method you could use, with a signal coming in, to approximate a mathematical f(x) function from it, to use in a Fourier transform?

I assume that when a processor does an FFT on a list of numbers (from the time domain) it generates another list of numbers (that are in the frequency domain)?
 
  • #2
tim9000 said:
Hi,
I'm just curious because I know wifi uses a digital FFT to send and receive signals. (I can't really remember why.)
But when I imagine a signal being sent it's like a squiggly wave, so what method does the receiver use to approximate the instantaneous values of the signal into a mathematical formula, or doesn't it bother?

What is an example of a method you could use, with a signal coming in, to approximate a mathematical f(x) function from it, to use in a Fourier transform?

I assume that when a processor does an FFT on a list of numbers (from the time domain) it generates another list of numbers (that are in the frequency domain)?
With gross simplification, there are two main processes involved in sending a signal, such as video, by digital transmission. Firstly, the analogue signal is sampled at least twice for each of its cycles, and the value of each sample is turned into bits representing a binary number. This used to be called Pulse Code Modulation. Secondly, the bits are used to modulate a radio transmitter. The receiver then demodulates the incoming signal, so it has the original bit stream, and re-builds a copy of the original analogue signal one sample at a time. All this can be done without FFT, but modern receivers use FFT for things like filtering and demodulation because it works well and means less hardware and more software, which is today's preference. (In previous times we similarly preferred horses or steam for everything). In practice, it is possible to greatly reduce the number of bits to be sent by using compression algorithms - anything repetitive or which the viewer may not notice can be deleted.
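The sample-and-quantize step described above can be sketched in a few lines of Python (numpy assumed; the 1 kHz tone, 8 kHz sample rate, and 8-bit code are arbitrary illustrative choices, not a real PCM standard):

```python
import numpy as np

fs = 8000                         # sample rate, Hz
f = 1000                          # tone frequency: 8 samples per cycle
t = np.arange(80) / fs            # 10 ms worth of sample instants
x = np.sin(2 * np.pi * f * t)     # the "analogue" signal, sampled

# Quantize each sample to an 8-bit code (0..255), as in PCM.
codes = np.round((x + 1) / 2 * 255).astype(np.uint8)

# The receiver maps codes back to amplitudes, one sample at a time.
reconstructed = codes / 255 * 2 - 1

# Worst-case quantization error is half a step, i.e. 1/255.
```

The bit stream sent over the air would just be these codes serialized; the receiver rebuilds the waveform one code at a time, as described in the post above.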
 
  • #3
tech99 said:
With gross simplification, there are two main processes involved in sending a signal, such as video, by digital transmission. Firstly, the analogue signal is sampled at least twice for each of its cycles, and the value of each sample is turned into bits representing a binary number. This used to be called Pulse Code Modulation. Secondly, the bits are used to modulate a radio transmitter. The receiver then demodulates the incoming signal, so it has the original bit stream, and re-builds a copy of the original analogue signal one sample at a time. All this can be done without FFT, but modern receivers use FFT for things like filtering and demodulation because it works well and means less hardware and more software, which is today's preference. (In previous times we similarly preferred horses or steam for everything). In practice, it is possible to greatly reduce the number of bits to be sent by using compression algorithms - anything repetitive or which the viewer may not notice can be deleted.
Ah, yeah so I suppose filtering using software instead of components would be the big advantage.

One reason I ask is because when I learned about Fourier transforms a few years ago I generally had a mathematical function f(x) to transform. But what you're describing would be the transform of a bunch of individual (discrete, I suppose) samples, rather than of a function, wouldn't it?
 
  • #4
Right. You just realized the difference between a continuous Fourier transform and a discrete Fourier transform (DFT) as implemented by the FFT.
https://en.wikipedia.org/wiki/Discrete_Fourier_transform

The sample points actually are the input function for a discrete Fourier transform (its input sequence). Digital signal processing is based on sampling theory, where a signal is represented by a series of discrete impulses, called samples, rather than by a continuous function. Again: the sequence of samples IS the input function.

WiFi demodulation uses the FFT to implement many filters at once, detecting the 64 subcarriers of its OFDM channel. Essentially it is transmitting on 64 frequencies at the same time. The FFT is the fastest way to detect carrier amplitude and phase on all 64 subcarriers.
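A toy numpy model of that idea (this is not the real 802.11 modulation chain, just the FFT-as-bank-of-filters principle; the subcarrier indices and symbols are invented):

```python
import numpy as np

N = 64  # FFT size, matching the 64 subcarrier slots mentioned above

# Transmitter side (toy model): put a known complex symbol on a few
# subcarriers and build the time-domain OFDM symbol with an inverse FFT.
symbols = np.zeros(N, dtype=complex)
symbols[3] = 1 + 1j      # subcarrier 3 carries this QAM-like symbol
symbols[10] = -1 + 0j
symbols[20] = 0 - 1j
tx = np.fft.ifft(symbols)            # one OFDM symbol, 64 time samples

# Receiver side: a single FFT acts as 64 matched filters at once,
# recovering amplitude and phase on every subcarrier simultaneously.
rx = np.fft.fft(tx)
print(np.round(rx[3], 6))   # → (1+1j)
```

The FFT output bin for each subcarrier gives back exactly the complex symbol that was transmitted on it, which is why one FFT replaces 64 separate filters.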
 
  • #5
meBigGuy said:
WiFi demodulation uses the FFT to implement many filters at once, detecting the 64 subcarriers of its OFDM channel. Essentially it is transmitting on 64 frequencies at the same time. The FFT is the fastest way to detect carrier amplitude and phase on all 64 subcarriers.
That's a bit over my head, how it can tell the amplitude and phase of each channel sounds like magic to me :-|

meBigGuy said:
The sample points actually represent the input function for a discrete Fourier transform (its input sequence). Digital signal processing is based on sampling theory where a signal is represented by a series of discrete impulses, called samples, rather than by a continuous function. Again. the sequence of samples IS the input function.

With the DFT, it's how the samples relate to each other to make data that I can't really comprehend. You've got a list of sample amplitudes x_n that make a list of values X_k; is there anything actually linking them together as a series, or as part of a waveform, other than the sample number? My issue is 'they just seem like numbers'; maybe I'm forgetting the implications of the frequency domain.
 
  • #6
First we will talk about time domain representations of signals by a sequence of numbers.

The first rule of sampling theory is the Nyquist theorem. A sequence of numbers will perfectly represent a continuous waveform if you assume no frequencies higher than half the sample frequency. Another way to say that is that you must take at least 2 samples per period of a waveform to reproduce it exactly. Half the sample frequency is called the Nyquist frequency. Sampling systems must filter their inputs so that there are no frequencies above the Nyquist frequency; that filter is called an anti-alias filter.

So what this means is that a sequence of sampled numbers can ALWAYS be converted back to the original continuous waveform if the Nyquist theorem was obeyed.
So, think of a sine wave. Sample it at 10 equally spaced points within the period. Those points look like the original sine wave, and no signal other than the original sine wave could produce those points if there are no frequencies higher than 1/2 the sample frequency (any "distortion" between points would introduce higher frequencies, which are not allowed). So those samples uniquely represent the sine wave.

https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

So, now I have a sequence that represents a single frequency. The DFT can operate on that and give me the frequency domain representation of that sine wave (its amplitude and phase).
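That recovery can be shown with a short numpy sketch: sample one period of a sinusoid at 10 points, as in the example above, and read its amplitude and phase straight out of the matching DFT bin (the amplitude 2.0 and phase pi/4 are arbitrary test values):

```python
import numpy as np

N = 10                                  # 10 samples of one full period
n = np.arange(N)
amp, phase = 2.0, np.pi / 4             # the sinusoid's "unknowns"
x = amp * np.cos(2 * np.pi * n / N + phase)   # the sample sequence

X = np.fft.fft(x)
k = 1                                   # bin for one cycle per record
print(2 * np.abs(X[k]) / N)             # recovered amplitude, ≈ 2.0
print(np.angle(X[k]))                   # recovered phase, ≈ pi/4
```

Bin k of an N-point DFT holds N/2 times the complex amplitude of the component with k cycles per record, which is why the scaling factor 2/N appears.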

tim9000 said:
That's a bit over my head, how it can tell the amplitude and phase of each channel sounds like magic to me :-|

If I have a 1000 point signal, roughly speaking, the DFT can turn it into a 1000 point spectrum with amplitude and phase at 1000 frequencies. I can grab groups of points and determine the energy in that band of frequencies.
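A sketch of that bin-grouping idea (the 50 Hz and 120 Hz tones are made up for illustration): with 1000 samples taken over one second, bin k of the 1000-point spectrum sits at k Hz, so summing |X[k]|^2 over a range of bins, plus their negative-frequency mirrors, gives the energy in that band.

```python
import numpy as np

fs = 1000
t = np.arange(1000) / fs                       # 1000 samples = 1 second
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.fft(x)                              # 1000-point spectrum
power = np.abs(X) ** 2

# Energy between 100 and 150 Hz: bins 100..150 plus their mirrors.
band_100_150 = power[100:151].sum() + power[850:901].sum()
# Energy between 40 and 60 Hz, containing the stronger 50 Hz tone.
band_40_60 = power[40:61].sum() + power[940:961].sum()
```

The stronger tone's band comes out with more energy, which is exactly the "grab groups of points" measurement described above.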
 
  • #7
Great reply, sorry I've been meaning for so long to get back to this.

So obeying the Nyquist theorem eliminates misconstruing noise on a signal as other frequencies?
So even digital packets use carrier waves?
Say I want to get data on a 100kHz wave and/or a 50kHz one, and I sample at 200kHz. Are you saying that if there is noise on it, it looks like higher frequencies and we say 'no, that's not allowed, so it must be noise'?
Like this:
[attached image: pfnow.png]


meBigGuy said:
So, now I have a sequence that represents a single frequency. The DFT can operate on that and give me the frequency domain representation of that sine wave (its amplitude and phase).
I'm still a bit hazy on this. Say I'm sampling away at some high frequency; I have a set of evenly spaced samples in time, and I can see by the amplitudes of each sample that the points look like a signal at half, or maybe a quarter, of the frequency of my sampling rate. I can see that graphically as a human, but how does a computer say 'oh yeah, that's a wave of such-and-such frequency'? (I know the phase is intrinsically revealed)

meBigGuy said:
If I have a 1000 point signal, roughly said, the DFT can turn it into a 1000 point spectrum with amplitude and phase of 1000 frequencies. I can grab groups of points and determine the energy in that band of frequencies.
Could you maybe explain with a picture? Do you mean like 1000 data samples? Are you saying you're putting it into the frequency domain to look at how much of the various frequencies are present?

Thanks!
 
  • #8
tim9000 said:
I'm still a bit hazy on this. Say I'm sampling away at some high frequency; I have a set of evenly spaced samples in time, and I can see by the amplitudes of each sample that the points look like a signal at half, or maybe a quarter, of the frequency of my sampling rate. I can see that graphically as a human, but how does a computer say 'oh yeah, that's a wave of such-and-such frequency'? (I know the phase is intrinsically revealed)

Well, I gave a simplified example. Technically the points represent the lowest possible frequency that has all the points, and its harmonics (which also have all the points). But, just assume for now that the higher frequencies are filtered out.

tim9000 said:
Could you maybe explain with a picture? Do you mean like 1000 data samples? Are you saying you're putting it into the frequency domain to look at how much of the various frequencies are present?

If I have a frequency domain representation of a signal, and a frequency domain representation of a filter, and I multiply them point by point in the frequency domain, I have essentially applied the filter. Say the filter has value 1 from 0 to 1kHz and value 0 after that. That's a "brick-wall" filter that just zeros out all the energy above 1kHz. You can then do the inverse FFT of the result to get the time domain representation of the filtered signal.
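As a concrete numpy sketch of that brick-wall filter (the 440 Hz and 3 kHz tones are invented for the demo):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                              # 1 second of samples
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 3000 * t)

X = np.fft.fft(x)
freqs = np.abs(np.fft.fftfreq(fs, 1 / fs))          # each bin's frequency, Hz

# "Brick-wall" low-pass: 1 below 1 kHz, 0 above, applied by
# point-by-point multiplication in the frequency domain.
H = (freqs < 1000).astype(float)
y = np.fft.ifft(X * H).real                         # back to the time domain

# Only the 440 Hz component survives; the 3 kHz tone is zeroed out.
```

The inverse FFT at the end is the step described above: it turns the filtered spectrum back into a time-domain waveform.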

It really gets to be fun when you realize that multiplication in the frequency domain is convolution in the time domain. I'll let you look that one up.
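The multiplication/convolution duality can be checked numerically; this is a sketch with a made-up 4-sample signal and a 2-tap filter, zero-padded so circular and ordinary convolution agree:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([0.5, 0.5])          # a simple 2-tap averaging filter

N = len(x) + len(h) - 1           # zero-pad to avoid wrap-around
X = np.fft.fft(x, N)
H = np.fft.fft(h, N)
via_fft = np.fft.ifft(X * H).real # multiply in the frequency domain

direct = np.convolve(x, h)        # convolve in the time domain
print(np.allclose(via_fft, direct))   # → True
```

For long filters this FFT route is much cheaper than direct convolution, which is one reason "less hardware, more software" works out.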

A lot of the stuff I've explained is a simplistic view. To really understand this accurately you need to study Digital Signal Processing: the mathematics of sampled data systems. What is a sampled data system, what properties do those data sequences have, and when I manipulate them, what is really happening? What is sampling noise, and how do I control it? What is imaging and aliasing? What is interpolation and decimation, and how are they used? (Actually, read the course description for any DSP class.)

In the continuous domain you can represent systems with Laplace transforms (s domain). In discrete systems one uses the Z-transform.
https://en.wikipedia.org/wiki/Digital_signal_processing

There are online courses. http://ocw.mit.edu/resources/res-6-008-digital-signal-processing-spring-2011/
 
  • #9
meBigGuy said:
Well, I gave a simplified example. Technically the points represent the lowest possible frequency that has all the points, and its harmonics (which also have all the points). But, just assume for now that the higher frequencies are filtered out.
If I have a frequency domain representation of a signal, and a frequency domain representation of a filter, and I multiply them point by point in the frequency domain, I have essentially applied the filter. Say the filter has value 1 from 0 to 1kHz and value 0 after that. That's a "brick-wall" filter that just zeros out all the energy above 1kHz. You can then do the inverse FFT of the result to get the time domain representation of the filtered signal.

It really gets to be fun when you realize that multiplication in the frequency domain is convolution in the time domain. I'll let you look that one up.

A lot of the stuff I've explained is a simplistic view. To really understand this accurately you need to study Digital Signal Processing: the mathematics of sampled data systems. What is a sampled data system, what properties do those data sequences have, and when I manipulate them, what is really happening? What is sampling noise, and how do I control it? What is imaging and aliasing? What is interpolation and decimation, and how are they used? (Actually, read the course description for any DSP class.)

https://en.wikipedia.org/wiki/Digital_signal_processing

There are online courses. http://ocw.mit.edu/resources/res-6-008-digital-signal-processing-spring-2011/
Right, so the point is that if I've got a signal sampled at least at twice the frequency of the signal, even if parts of it are missing and there's noise, I can recreate the original fundamental frequency despite this. But/So the processor receiving the signal won't think 'aha, it's a function of this frequency', it'll just join up all the sampled points (including any noise that happens to be sampled)? [Is there some complex algorithm for determining what the samples mean?]
So do digital packets of data still use a fundamental carrier wave, which is sampled at at least twice its frequency? Yeah ok, so I can plonk two signals on top of each other in the time domain when I multiply them together in the frequency domain. To filter, what have you.
 
  • #10
Well, you need to be careful about what you are saying regarding noise. What you have sampled is what you have sampled. If you reconstruct it in the time domain by generating impulses and filtering at Nyquist you will have the original signal (a signal which, if sampled at Nyquist, would have created those samples).

There is no concept of "even if parts of it are missing and there is noise". If you take a 10X oversampled sequence for a sine wave and zero out samples in the middle, the reconstructed wave will have missing samples in the middle, but no components above Nyquist, assuming you filter the impulses at Nyquist.

If you take an analog signal with noise components above Nyquist and sample it, you will get aliasing, which essentially takes the above-Nyquist noise and translates it to below-Nyquist noise.
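Aliasing is easy to demonstrate numerically; in this sketch (sample rate and tones chosen arbitrarily) a 900 Hz tone sampled at 1000 Hz produces exactly the same samples as a 100 Hz tone:

```python
import numpy as np

fs = 1000                            # sample rate; Nyquist is 500 Hz
t = np.arange(fs) / fs               # 1 second of sample instants

above = np.cos(2 * np.pi * 900 * t)  # 900 Hz: above Nyquist
below = np.cos(2 * np.pi * 100 * t)  # 100 Hz: its sub-Nyquist alias

# Sample for sample, the two are indistinguishable: the 900 Hz
# energy has folded down to |900 - 1000| = 100 Hz.
```

This is why the anti-alias filter must run before the sampler: once the samples are taken, nothing can tell the folded-down noise apart from a genuine 100 Hz component.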

As for a "fundamental carrier wave", that is a different subject. Now you are talking about modulation and encoding schemes that may or may not use a carrier at their baseband frequency. But generally one conceptually starts with a carrier frequency, even if it is DC. The resulting modulated signal could be SSB, DSB, or whatever, after using QAM, OFDM or however the signal is produced. That baseband digital data signal is then modulated onto an RF carrier.

Conceptually it can be as simple as Morse-code-like 0's and 1's causing a 2.4GHz carrier to switch on and off. If I mix to DC I can then sample the resulting pulses at a frequency of at least twice the Morse code frequency.
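That Morse-code-like scheme is on-off keying, and it can be sketched end to end (the sample rate, bit rate, and 20 kHz carrier are scaled-down stand-ins, not real WiFi numbers):

```python
import numpy as np

fs = 100_000                 # sample rate for this toy model, Hz
bit_rate = 1000              # 1000 bits per second
carrier = 20_000             # carrier frequency, Hz

bits = np.array([1, 0, 1, 1, 0])
samples_per_bit = fs // bit_rate
envelope = np.repeat(bits, samples_per_bit)          # on/off gate
t = np.arange(len(envelope)) / fs
ook = envelope * np.sin(2 * np.pi * carrier * t)     # keyed carrier

# Receiver (idealized): measure the energy in each bit period and
# threshold it; energy present means "1", absent means "0".
energy = ook.reshape(len(bits), samples_per_bit) ** 2
decoded = (energy.mean(axis=1) > 0.25).astype(int)
print(decoded)               # → [1 0 1 1 0]
```

A real receiver would mix the RF down to baseband first, as described above, but the detect-energy-per-bit step is the same idea.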
 
  • #11
meBigGuy said:
Well, you need to be careful about what you are saying regarding noise. What you have sampled is what you have sampled. If you reconstruct it in the time domain by generating impulses and filtering at Nyquist you will have the original signal (a signal which, if sampled at Nyquist, would have created those samples).

There is no concept of "even if parts of it are missing and there is noise". If you take a 10X oversampled sequence for a sine wave and zero out samples in the middle, the reconstructed wave will have missing samples in the middle, but no components above Nyquist, assuming you filter the impulses at Nyquist.

If you take an analog signal with noise components above Nyquist and sample it, you will get aliasing, which essentially takes the above-Nyquist noise and translates it to below-Nyquist noise.

As for a "fundamental carrier wave", that is a different subject. Now you are talking about modulation and encoding schemes that may or may not use a carrier at their baseband frequency. But generally one conceptually starts with a carrier frequency, even if it is DC. The resulting modulated signal could be SSB, DSB, or whatever, after using QAM, OFDM or however the signal is produced. That baseband digital data signal is then modulated onto an RF carrier.

Conceptually it can be as simple as Morse-code-like 0's and 1's causing a 2.4GHz carrier to switch on and off. If I mix to DC I can then sample the resulting pulses at a frequency of at least twice the Morse code frequency.
Ah, "Orthogonal frequency-division multiplexing". Interesting.

Right, so any noise will show up in the signal. I'm still hung up on the point that 'you can recreate the signal perfectly' as long as it was sampled above the critical frequency. To recreate it I'm assuming you need to think about what it must have been. So would I be right in saying that the processor doesn't try to determine what frequency it was getting a signal on (it should already know that and be looking on that frequency), that it recreates data using parity bits or some other method of verification, and then, if it is suspicious of the signal it received, it just asks for it again until it is happy nothing got lost on the way?

Also, how does it determine the frequency of what it is sampling? Is it some complicated algorithm? (A picture, maybe something like what I posted
tim9000 said:
[attached image: pfnow-png.87076.png]
would be helpful, if there is some graphically translatable explanation.)

Thanks
 
  • #12
tim9000 said:
Right, so the point is that if I've got a signal sampled at least at twice the frequency of the signal, even if parts of it are missing and there's noise, I can recreate the original fundamental frequency despite this. But/So the processor receiving the signal won't think 'aha, it's a function of this frequency', it'll just join up all the sampled points (including any noise that happens to be sampled)? [Is there some complex algorithm for determining what the samples mean?]
So do digital packets of data still use a fundamental carrier wave, which is sampled at at least twice its frequency? Yeah ok, so I can plonk two signals on top of each other in the time domain when I multiply them together in the frequency domain. To filter, what have you.

Sampling at twice the frequency is a theoretical limit. Real systems oversample. This makes up for errors like not getting the voltage level exact down to the nV, or not having the timing perfect down to the fs.

Carrier waves are another matter. Most electronic communication systems use them as a reference, but some don't. Still, I'm not sure they matter much to DSP.

BTW, you cannot perfectly recreate the signal. All signals have noise and noise spreads all across the spectrum. So some of the noise will be above the sampling frequency and not be recreated. Further this noise will lead to the sample being off by some small amount which will distort the signal slightly. (Analog methods also add some amount of distortion, so it's not a big deal, just understand it's a question of quantity. Nothing is perfect.)

Digital signals are easy; that's why we use them. Any distortion is likely small enough to not change the reading of a bit. Even on the rare occasions when it does, the message is easily checked and corrected by forward error correction (including redundant data) or reverse error correction (retransmitting the garbled message).
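A minimal sketch of the error-detection side of this (a single even-parity bit; real systems like WiFi use CRCs and much stronger forward-error-correcting codes, so this is only the simplest possible illustration):

```python
# Append an even-parity bit so the receiver can tell when a single
# bit was flipped in transit.
def add_parity(bits):
    return bits + [sum(bits) % 2]

def check_parity(word):
    # True means "no single-bit error detected".
    return sum(word) % 2 == 0

sent = add_parity([1, 0, 1, 1])       # → [1, 0, 1, 1, 1]
assert check_parity(sent)

garbled = sent.copy()
garbled[2] ^= 1                        # one bit flipped by noise
assert not check_parity(garbled)       # the error is detected
```

On a failed check, a reverse-error-correction scheme simply asks for the message again, as described above; forward error correction adds enough redundancy to repair the bit without retransmission.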

An FFT is a method of running a Fourier Transform on a data set. It converts time domain data to frequency domain data (or the other way around). It has lots of limitations that need to be understood. The biggest of these is that it is not real time. You need to collect a sample for some length of time before you can run the transform. The transform then operates on that dataset. So if you collect for ten seconds, you are seeing the spectral content for the entire ten seconds. Even if you update the set every second (dropping the oldest second's data) you still see into the past, not the present. With some thought we can see how this relates to the Nyquist Rate and also the Heisenberg Uncertainty Principle.

I am always awed by how well the standard model fits together across apparently unrelated fields of study.
 
  • #13
tim9000 said:
Also, so how does it determine the frequency of what it is sampling? Is it some complicated algorythm? (a picture, maybe something like what I posted

Generally it doesn't, unless it is specifically trying to determine the frequency. An FFT, or some other algorithm, would do the job: maybe average zero crossings, apply a filter, etc. Generally there is no single frequency but, rather, bands of signals.
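The zero-crossing approach mentioned above is a few lines of numpy (the 440 Hz tone is an arbitrary test signal):

```python
import numpy as np

fs = 8000
t = np.arange(fs) / fs                 # 1 second of samples
x = np.sin(2 * np.pi * 440 * t)

# Count sign changes: a sine crosses zero twice per cycle, so
# crossings / 2 over one second estimates the frequency in Hz.
crossings = np.count_nonzero(np.diff(np.signbit(x)))
freq_estimate = crossings / 2
print(freq_estimate)                   # ≈ 440
```

It only works cleanly on a single dominant tone; with bands of signals present, the FFT is the more general tool.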

Jeff Rosenbury said:
All signals have noise and noise spreads all across the spectrum. So some of the noise will be above the sampling frequency and not be recreated.
Careful about what you say. Signals above the Nyquist frequency (1/2 the sample frequency) will be aliased back into the sub-Nyquist band. Other than aliasing, though, and within the limits of sampling noise, the input signal can be perfectly recreated from the samples. Nothing is totally missed; in fact the energy of signal + noise + out-of-band will be the same as the energy of signal + noise + aliased-out-of-band.
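That energy bookkeeping is Parseval's theorem, and it can be checked directly (the random 256-sample signal here is just a stand-in for any sampled waveform):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)           # any sampled signal will do

X = np.fft.fft(x)
time_energy = np.sum(x ** 2)
freq_energy = np.sum(np.abs(X) ** 2) / len(x)   # Parseval's theorem

print(np.isclose(time_energy, freq_energy))     # → True
```

The total energy in the samples always equals the total energy in the spectrum, aliased components included.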

Just to clarify, when I said "perfect" I was alluding to the fact that sampled sub-nyquist signals contain ALL the information needed to perfectly create the original signal. There is nothing inherently lost in the digitization and recreation other than the limits of the sampling noise based on the number of bits. Note that in a system with adequate dither you can actually extract a signal that was originally less than 1 bit.

Now, no system is implemented perfectly, but do not confuse that with the theory of sampled data systems.

https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
 
  • #14
tim9000 said:
Hi,
I'm just curious because I know wifi uses digital FFT to send and receive signals. (I can't really remember why)
But when I imagine a signal being sent its like a squiggily wave, so what method does the reciever use to approximate the instantanious values of the signal into a mathematical formula, or doesn't it bother?

What is an example method if you had a signal coming in, that you could use to approximate a mathematical f(x) function out of it, to use in a Fourier transform?

I assume that when a processor does an FFT on a list of numbers (from the time domain) it generates another list of numbers (that are in the frequency domain)?

The FFT is typically used to generate the modulation signal for the data input, but the RF carrier modulation/demodulation in most modern digital systems uses I/Q processing in the transmit and receive paths of Software Defined Radios (SDRs).
 
  • #15
WiFi uses Orthogonal frequency-division multiplexing: many sub-carriers (802.11 WiFi uses 52), and the FFT is used to filter them.
[attached image: OFDM_Transmitter_Receiver.jpg]
 
  • #16
Thanks for the great replies, I've only had a quick look atm, but hopefully I'll get to have a more in-depth look over the coming days!
 

1. What is the most common method used by receivers and transmitters to approximate a signal?

The most common method used is called modulation, which involves manipulating a carrier signal to encode information. This allows the signal to be transmitted over a medium, such as wires or air, and then demodulated by the receiver to retrieve the original information.

2. How does modulation work?

Modulation works by altering one or more of the carrier signal's characteristics, such as amplitude, frequency, or phase, to represent the information being transmitted. This altered signal, known as the modulated signal, can then be transmitted and later demodulated to retrieve the original information.
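A numeric sketch of the amplitude case (the 5 Hz message and 1 kHz carrier are invented for the demo):

```python
import numpy as np

fs = 50_000
t = np.arange(fs) / fs                   # 1 second of samples
message = np.sin(2 * np.pi * 5 * t)      # 5 Hz information signal
carrier = np.sin(2 * np.pi * 1000 * t)   # 1 kHz carrier

# AM: the message varies the carrier's envelope (50% depth here),
# so the modulated wave's envelope swings between 0.5 and 1.5.
am = (1 + 0.5 * message) * carrier
```

Frequency and phase modulation work the same way, except the message alters the carrier's instantaneous frequency or phase instead of its amplitude.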

3. Are there different types of modulation?

Yes, there are several types of modulation used in different communication systems. Some common types include amplitude modulation (AM), frequency modulation (FM), and phase modulation (PM).

4. Can a receiver or transmitter use more than one type of modulation?

Yes, it is possible for a receiver or transmitter to use multiple types of modulation. This is known as multi-mode or multi-band operation and is commonly used in modern communication systems to improve efficiency and signal quality.

5. How does the choice of modulation affect the quality of a signal?

The choice of modulation can greatly impact the quality of a signal. Different types of modulation have varying levels of noise immunity, bandwidth efficiency, and data transfer rates. Choosing the right modulation for a specific application is crucial for achieving optimal signal quality.
