
What method does a receiver or transmitter use to approx...

  1. Jul 31, 2015 #1
    Hi,
    I'm just curious because I know wifi uses a digital FFT to send and receive signals. (I can't really remember why.)
    But when I imagine a signal being sent, it's like a squiggly wave, so what method does the receiver use to approximate the instantaneous values of the signal into a mathematical formula, or doesn't it bother?

    What is an example method, if you had a signal coming in, that you could use to approximate a mathematical f(x) function from it, to use in a Fourier transform?

    I assume that when a processor does an FFT on a list of numbers (from the time domain) it generates another list of numbers (that are in the frequency domain)?
     
  3. Jul 31, 2015 #2
    With gross simplification, there are two main processes involved in sending a signal, such as video, by digital transmission. Firstly, the analogue signal is sampled at least twice for each of its cycles, and the value of each sample is turned into bits representing a binary number. This used to be called Pulse Code Modulation. Secondly, the bits are used to modulate a radio transmitter.

    The receiver then demodulates the incoming signal, so it has the original bit stream, and re-builds a copy of the original analogue signal one sample at a time. All this can be done without the FFT, but modern receivers use the FFT for things like filtering and demodulation because it works well and means less hardware and more software, which is today's preference. (In previous times we similarly preferred horses or steam for everything.)

    In practice, it is possible to greatly reduce the number of bits to be sent by using compression algorithms - anything repetitive, or which the viewer may not notice, can be deleted.
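    The sample-and-quantize step described above (the old Pulse Code Modulation idea) can be sketched in a few lines of Python. This is only an illustration: the tone frequency, sample rate, and 8-bit depth are all made-up values, not anything from a real standard.

```python
import math

# Rough PCM sketch: sample an analogue tone and turn each sample into
# an 8-bit binary code. All the numbers here are illustrative.
SIGNAL_HZ = 1000     # analogue tone being digitised
SAMPLE_HZ = 8000     # comfortably above the 2 kHz Nyquist minimum
BITS = 8

def pcm_encode(n_samples):
    """Return quantised sample codes in the range 0..2**BITS - 1."""
    codes = []
    for n in range(n_samples):
        x = math.sin(2 * math.pi * SIGNAL_HZ * n / SAMPLE_HZ)  # in [-1, 1]
        codes.append(round((x + 1) / 2 * (2 ** BITS - 1)))
    return codes

codes = pcm_encode(16)
bitstream = "".join(format(c, "08b") for c in codes)  # bits to modulate
```

    A receiver runs this in reverse: regroup the bit stream into 8-bit codes and map each code back to a voltage, one sample at a time.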
     
  4. Jul 31, 2015 #3
    Ah, yeah so I suppose filtering using software instead of components would be the big advantage.

    One reason I ask is because when I learnt about Fourier transforms a few years ago I generally had a mathematical function f(x) to transform. But what you're describing, it'd be the transform of a bunch of individual (discrete, I suppose) samples, rather than a function, wouldn't it?
     
  5. Aug 1, 2015 #4

    meBigGuy (Gold Member)

    Right. You just realized the difference between a continuous Fourier transform and a discrete Fourier transform (DFT) as implemented by the FFT.
    https://en.wikipedia.org/wiki/Discrete_Fourier_transform

    The sample points actually represent the input function for a discrete Fourier transform (its input sequence). Digital signal processing is based on sampling theory, where a signal is represented by a series of discrete impulses, called samples, rather than by a continuous function. Again: the sequence of samples IS the input function.

    WiFi demodulation uses the FFT to implement many filters to detect the 64 carriers in its OFDM channels. Essentially it is transmitting on 64 frequencies at the same time. The FFT is the fastest way to detect carrier amplitude and phase on all 64 channels.
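    The "one transform acts as many filters" idea can be sketched with a plain DFT (mathematically the same thing the FFT computes, just slower). This is a toy illustration, not WiFi's actual signal chain: three carriers with made-up amplitudes and phases are summed, and a single transform recovers each carrier's amplitude and phase.

```python
import cmath
import math

N = 64  # transform length (one OFDM-style symbol)

def dft(x):
    """Direct O(N^2) DFT; the FFT computes exactly this, just faster."""
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

# "Transmit": sum a few orthogonal carriers, each carrying its own
# amplitude and phase (the data). Bin numbers and values are made up.
tx = {3: (1.0, 0.5), 7: (0.5, -1.0), 12: (2.0, 0.0)}  # bin: (amp, phase)
signal = [sum(a * math.cos(2 * math.pi * k * n / N + ph)
              for k, (a, ph) in tx.items()) for n in range(N)]

# "Receive": one transform recovers every carrier at once.
spectrum = [X / (N / 2) for X in dft(signal)]  # scale so |X[k]| = amplitude
recovered = {k: (abs(spectrum[k]), cmath.phase(spectrum[k])) for k in tx}
```

    Because the carriers sit on integer bin numbers, they are orthogonal over the symbol: each output bin responds to exactly one carrier and ignores the rest.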
     
  6. Aug 1, 2015 #5
    That's a bit over my head, how it can tell the amplitude and phase of each channel sounds like magic to me :-|

    With the DFT, it's how the samples relate to each other to make data that I can't really comprehend. You've got a list of sample amplitudes x_n that makes a list of values X_k; is there anything actually linking them together as a series or part of a waveform, other than the sample number? My issue is that 'they just seem like numbers'; maybe I'm forgetting the implications of the frequency domain.
     
  7. Aug 1, 2015 #6

    meBigGuy (Gold Member)

    First we will talk about time domain representations of signals by a sequence of numbers.

    The first rule of sampling theory is the Nyquist theorem. Any sequence of numbers will perfectly represent a continuous waveform if you assume no frequencies higher than half the sample frequency. Another way to say that is that you must provide at least 2 samples per period of a waveform to reproduce it exactly. One half the sample frequency is called the Nyquist frequency. Sampling systems must filter their inputs so that there are no frequencies above the Nyquist frequency. That filter is called an anti-alias filter.

    So what this means is that a sequence of sampled numbers can ALWAYS be converted back to the original continuous waveform if the Nyquist theorem was obeyed.
    So, think of a sine wave. Sample it at 10 equally spaced points within the period. Those points look like the original sine wave, and there is no way any signal other than the original sine wave could produce those points if there are no frequencies higher than 1/2 that sample frequency (any "distortion" between points would result in higher frequencies, which are not allowed). So those samples uniquely represent the sine wave.

    https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem

    So, now I have a sequence that represents a single frequency. The DFT can operate on that and give me the frequency domain representation of that sine wave (its amplitude and phase).

    If I have a 1000-point signal, roughly speaking, the DFT can turn it into a 1000-point spectrum with the amplitude and phase of 1000 frequencies. I can grab groups of points and determine the energy in that band of frequencies.
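    The "energy in a band of bins" trick can be sketched like so. This is my own toy example, not from the thread: a 200-point sequence instead of 1000 (just for speed), with a made-up 30 Hz tone and sample rate chosen so each bin is 1 Hz wide.

```python
import cmath
import math

# Sample a tone, transform it, then sum |X[k]|^2 over a group of bins
# to get the energy in that frequency band. Sizes are illustrative.
N = 200            # samples (the post said 1000; smaller here for speed)
FS = 200.0         # sample rate in Hz, so each bin is 1 Hz wide
TONE = 30.0        # tone frequency, well below the 100 Hz Nyquist limit

samples = [math.sin(2 * math.pi * TONE * n / FS) for n in range(N)]
spectrum = [sum(samples[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def band_energy(lo_bin, hi_bin):
    """Energy in bins lo_bin..hi_bin-1 (1 Hz per bin here)."""
    return sum(abs(spectrum[k]) ** 2 for k in range(lo_bin, hi_bin))

in_band = band_energy(25, 36)        # the 25-35 Hz band
total = band_energy(1, N // 2)       # all positive frequencies
```

    Essentially all of the positive-frequency energy lands in the 25-35 Hz band, and grabbing a different group of bins would report almost none.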
     
    Last edited: Aug 1, 2015
  8. Aug 9, 2015 #7
    Great reply, sorry I've been meaning for so long to get back to this.

    So obeying the Nyquist theorem eliminates misconstruing noise on a signal as other frequencies?
    So even digital packets use carrier waves?
    Say I want to get data on a 100 kHz wave and/or 50 kHz, and I sample it at 200 kHz. Are you saying that if there is noise on it, that looks like higher frequencies, and we say 'no, that's not allowed, so it must be noise'?
    Like this:
    [attached image: pfnow.png]

    I'm still a bit hazy on this. Say I'm sampling away at some high frequency; I have a set of evenly spaced samples in time, and I can see by the amplitudes of each sample that the points look like a signal at half, or maybe a quarter, of the frequency of my sampling rate. I can see that graphically as a human, but how does a computer say 'oh yeah, that's a wave of such-and-such frequency'? (I know the phase is intrinsically revealed.)

    Could you maybe explain with a picture? Do you mean like 1000 data samples? Are you saying you're putting it into the frequency domain, to look at how much of the various frequencies are present?

    Thanks!
     
  9. Aug 9, 2015 #8

    meBigGuy (Gold Member)

    Well, I gave a simplified example. Technically the points represent the lowest possible frequency that passes through all the points, and its harmonics (which also pass through all the points). But just assume for now that the higher frequencies are filtered out.

    If I have a frequency domain representation of a signal, and a frequency domain representation of a filter, then if I multiply them point by point in the frequency domain I have essentially applied the filter. Say the filter has value 1 from 0 to 1 kHz and value 0 after that. That's a "brick-wall" filter that just zeros out all the energy above 1 kHz. You can then take the inverse FFT of the result to get the time domain representation of the filtered signal.
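    That brick-wall filter can be sketched end to end. This is a toy example with made-up frequencies (a wanted 5 Hz tone plus a 20 Hz interferer, cutting everything above 10 Hz), using a direct DFT so it stays self-contained:

```python
import cmath
import math

N = 64
FS = 64.0  # sample rate chosen so each bin is exactly 1 Hz wide

def dft(x, sign):
    """sign=-1 for the forward transform, +1 for the inverse (unscaled)."""
    return [sum(x[n] * cmath.exp(sign * 2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

# A wanted 5 Hz tone plus an unwanted 20 Hz tone.
x = [math.sin(2 * math.pi * 5 * n / FS) + math.sin(2 * math.pi * 20 * n / FS)
     for n in range(N)]

# Transform, zero every bin above the cutoff, transform back.
X = dft(x, -1)
CUTOFF_HZ = 10
for k in range(N):
    freq = k if k <= N // 2 else N - k   # bin index -> frequency in Hz
    if freq > CUTOFF_HZ:
        X[k] = 0

filtered = [v.real / N for v in dft(X, +1)]  # inverse needs the 1/N scale
wanted = [math.sin(2 * math.pi * 5 * n / FS) for n in range(N)]
```

    Note the mirror bins above N/2 must be zeroed symmetrically (handled by the bin-to-frequency mapping above), otherwise the result is no longer real-valued.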

    It really gets to be fun when you realize that multiplication in the frequency domain is convolution in the time domain. I'll let you look that one up.
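    That duality is easy to verify numerically on a toy pair of sequences (my own sketch; note that with the DFT you get *circular* convolution):

```python
import cmath
import math

N = 8

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    return [(sum(X[k] * cmath.exp(2j * math.pi * k * n / N)
                 for k in range(N)) / N).real for n in range(N)]

a = [1, 2, 3, 0, 0, 0, 0, 0]
b = [4, 5, 6, 0, 0, 0, 0, 0]

# Route 1: multiply the two spectra point by point, come back to time.
via_freq = idft([A * B for A, B in zip(dft(a), dft(b))])

# Route 2: circular convolution computed directly in the time domain.
direct = [sum(a[m] * b[(n - m) % N] for m in range(N)) for n in range(N)]
```

    Both routes give the same sequence; the zero-padding keeps the circular wrap-around from corrupting the result.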

    A lot of the stuff I've explained is a simplistic view. To really understand this accurately you need to study Digital Signal Processing: the mathematics of sampled-data systems. What is a sampled-data system, what properties do those data sequences have, and when I manipulate them, what is really happening? What is sampling noise, and how do I control it? What are imaging and aliasing? What are interpolation and decimation, and how are they used? (Actually, read the course description for any DSP class.)

    In the continuous domain you can represent systems with Laplace transforms (s domain). In discrete systems one uses the Z-transform.
    https://en.wikipedia.org/wiki/Digital_signal_processing

    There are online courses. http://ocw.mit.edu/resources/res-6-008-digital-signal-processing-spring-2011/
     
    Last edited by a moderator: May 7, 2017
  10. Aug 9, 2015 #9
    Right, so the point is that if I've got a signal sampled at least at twice the frequency of the signal, even if parts of it are missing and there's noise, I can recreate the original fundamental frequency despite this. But/So the processor receiving the signal won't think 'aha, it's a function of this frequency'; it'll just join up all the sampled points (including if noise happens to be sampled)? [Is there some complex algorithm for determining what the samples mean?]
    So do digital packets of data still use a fundamental carrier wave, which is sampled at at least twice its frequency?


    Yeah, OK, so I can plonk two signals on top of each other in the time domain when I multiply them together in the frequency domain. To filter, or what have you.
     
  11. Aug 9, 2015 #10

    meBigGuy (Gold Member)

    Well, you need to be careful about what you are saying regarding noise. What you have sampled is what you have sampled. If you reconstruct it in the time domain by generating impulses and filtering at Nyquist you will have the original signal (a signal which, if sampled at Nyquist, would have created those samples).

    There is no concept of "even if parts of it are missing and there is noise". If you take a 10X oversampled sequence for a sine wave and zero out samples in the middle, the reconstructed wave will have missing samples in the middle, but no components above Nyquist, assuming you filter the impulses at Nyquist.

    If you take an analog signal with noise components above Nyquist and sample it, you will get aliasing, which essentially takes the "above-Nyquist" noise and translates it to "below-Nyquist" noise.
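    The folding can be seen directly with two made-up tones: sampled at 100 Hz (Nyquist 50 Hz), a 70 Hz tone produces exactly the same samples as a 30 Hz tone.

```python
import math

FS = 100.0               # sample rate; Nyquist frequency is 50 Hz
F_HIGH = 70.0            # above Nyquist
F_ALIAS = FS - F_HIGH    # folds back to 30 Hz

high = [math.cos(2 * math.pi * F_HIGH * n / FS) for n in range(64)]
alias = [math.cos(2 * math.pi * F_ALIAS * n / FS) for n in range(64)]

# The two sample sequences are indistinguishable, which is exactly why
# above-Nyquist noise reappears as below-Nyquist noise after sampling.
worst_difference = max(abs(a - b) for a, b in zip(high, alias))
```

    Once the samples are taken there is no way to tell the two tones apart, which is why the anti-alias filter has to come *before* the sampler.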

    As for "fundamental carrier wave", that is a different subject. Now you are talking of modulation and encoding schemes that may or may not use a carrier at their baseband frequency. But generally one conceptually starts with a carrier frequency, even if it is DC. The resulting modulated signal could be SSB, DSB, or whatever, after using QAM, OFDM or however the signal is produced. That baseband digital data signal is then modulated onto an RF carrier.

    Conceptually it can be as simple as Morse-code-like 0's and 1's causing a 2.4 GHz carrier to switch on and off. If I mix to DC I can then sample the resulting pulses at a frequency of at least twice the Morse code frequency.
     
  12. Aug 11, 2015 #11
    Ah, "Orthogonal frequency-division multiplexing". Interesting.

    Right, so any noise will show up in the signal. I'm still hung up on the point that 'you can recreate the signal perfectly' as long as it was sampled above the critical frequency. To recreate it I'm assuming you need to think about what it must have been. So would I be right in saying that the processor doesn't try to determine what frequency it was getting a signal on (it should already know that and be looking on that frequency), that it recreates data using parity bits or some other method of verification, and then, if it is suspicious of the signal it received, it just asks for it again until it is happy nothing got lost on the way?

    Also, how does it determine the frequency of what it is sampling? Is it some complicated algorithm? (A picture, maybe something like what I posted, would be helpful if there is some graphically translatable explanation.)

    Thanks
     
  13. Aug 11, 2015 #12
    Sampling at twice the frequency is a theoretical limit. Real systems oversample. This makes up for errors like not getting the voltage level right down to the nanovolt, or not having the timing perfect down to the femtosecond.

    Carrier waves are another matter. Most electronic communication systems use them as a reference, but some don't. Still, I'm not sure they matter much to DSP.

    BTW, you cannot perfectly recreate the signal. All signals have noise and noise spreads all across the spectrum. So some of the noise will be above the sampling frequency and not be recreated. Further this noise will lead to the sample being off by some small amount which will distort the signal slightly. (Analog methods also add some amount of distortion, so it's not a big deal, just understand it's a question of quantity. Nothing is perfect.)

    Digital signals are easy; that's why we use them. Any distortion is likely small enough to not change the reading of a bit. Even on the rare occasions when it does, the message is easily checked and corrected by forward error correction (including redundant data) or reverse error correction (retransmitting the garbled message).

    An FFT is a method of computing a Fourier transform on a data set. It converts time domain data to frequency domain data (or the other way around). It has lots of limitations that need to be understood. The biggest of these is that it is not real time. You need to collect samples for some length of time before you can run the transform. The transform then operates on that data set. So if you collect for ten seconds, you are seeing the spectral content for the entire ten seconds. Even if you update the set every second (dropping the oldest second's data), you still see into the past, not the present. With some thought we can see how this relates to the Nyquist rate and also the Heisenberg uncertainty principle.

    I am always awed by how well the standard model fits together across apparently unrelated fields of study.
     
  14. Aug 11, 2015 #13

    meBigGuy (Gold Member)

    Generally it doesn't, unless it is specifically trying to determine the frequency. An FFT, or some other algorithm, would do the job: maybe average zero crossings, apply a filter, etc. Generally there is no single frequency but, rather, bands of signals.
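    The zero-crossing estimate mentioned above can be sketched in a few lines. This assumes a single clean tone with made-up numbers; a real signal would need filtering first.

```python
import math

FS = 1000.0   # sample rate in Hz
TONE = 50.0   # the tone whose frequency we pretend not to know
N = 1000      # exactly one second of samples

x = [math.sin(2 * math.pi * TONE * n / FS) for n in range(N)]

# Count sign changes; a sine crosses zero twice per cycle, so
# crossings-per-second divided by two estimates the frequency.
crossings = sum(1 for a, b in zip(x, x[1:]) if (a < 0) != (b < 0))
estimated_hz = crossings / 2 / (N / FS)
```

    It lands within a cycle of the true 50 Hz; an FFT peak search would do the same job for signals with more than one component.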

    Careful about what you say. Signals above Nyquist (1/2 the sample frequency) will be aliased back into the sub-Nyquist band. Other than aliasing, though, and within the limits of sampling noise, the input signal can be perfectly recreated from the samples. Nothing is totally missed; in fact the energy of signal + noise + out-of-band will be the same as signal + noise + aliased-out-of-band.

    Just to clarify, when I said "perfect" I was alluding to the fact that sampled sub-Nyquist signals contain ALL the information needed to perfectly recreate the original signal. There is nothing inherently lost in the digitization and recreation other than the limits of the sampling noise based on the number of bits. Note that in a system with adequate dither you can actually extract a signal that was originally less than 1 bit.
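    That "all the information is in the samples" claim is the Whittaker-Shannon interpolation formula: weight each sample by a sinc and the original waveform falls out, even between the sample instants. A rough sketch (a finite sum, so only approximately; all the numbers are illustrative):

```python
import math

FS = 100.0    # sample rate; Nyquist is 50 Hz
TONE = 7.0    # a tone safely below Nyquist
N = 400

x = [math.sin(2 * math.pi * TONE * n / FS) for n in range(N)]

def sinc(u):
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

def reconstruct(t):
    """Whittaker-Shannon reconstruction at time t (seconds), truncated."""
    return sum(x[n] * sinc(t * FS - n) for n in range(N))

t = 200.5 / FS  # a point exactly halfway between two samples, mid-sequence
error = abs(reconstruct(t) - math.sin(2 * math.pi * TONE * t))
```

    With an infinite sum the error would be zero; truncating to a finite window leaves a small residual, which is part of why real systems oversample.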

    Now, no system is implemented perfectly, but do not confuse that with the theory of sampled data systems.

    https://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
     
    Last edited: Aug 12, 2015
  15. Aug 12, 2015 #14

    nsaspook (Science Advisor)

    The FFT typically is used to generate the modulation signal from the data input, but the RF carrier modulation/demodulation in most modern digital systems uses IQ processing in the transmit and receive paths of Software Defined Radios (SDRs).
     
  16. Aug 12, 2015 #15

    meBigGuy (Gold Member)

    WiFi uses Orthogonal frequency-division multiplexing: many sub-carriers (802.11 WiFi uses 52), and the FFT is used to filter them.
    [attached image: OFDM_Transmitter_Receiver.jpg]
     
  17. Aug 12, 2015 #16
    Thanks for the great replies, I've only had a quick look atm, but hopefully I'll get to have a more in-depth look over the coming days!
     