# Compressing a transmitted signal

1. Sep 7, 2010

### granpa

Would it be possible, in theory, to take an audio signal, do a Fourier transform on it, divide all the frequencies by 10, convert it back to the time domain, transmit the resulting signal over 1/10th of the bandwidth you would have originally needed, and then reverse the process on the other end?

There would be some loss of signal quality, and I don't suppose it could be done in real time, but wouldn't it at least be possible with something like radio?
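A minimal numpy sketch of the scheme being proposed, with assumed numbers (8 kHz sample rate, two test tones): every FFT bin k is mapped to bin k // 10, which divides each frequency by 10, and the receiver reverses it by mapping bin k back to bin 10 * k. Ten original bins collapse onto one compressed bin, so the reversal cannot tell them apart; this is where the information loss the posts discuss comes from.

```python
import numpy as np

# Assumed sample rate and test tones: 440 Hz and 444 Hz, which fall in the
# same compressed bin (440 // 10 == 444 // 10 == 44) and so merge.
fs = 8000
t = np.arange(fs) / fs                     # one second of "audio"
x = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 444 * t)

X = np.fft.rfft(x)                         # 1 Hz bin spacing for this record

# "Divide all the frequencies by 10": many-to-one bin mapping (lossy).
compressed = np.zeros_like(X)
for k in range(len(X)):
    compressed[k // 10] += X[k]

# "Reverse the process on the other end": map bin k back to bin 10 * k.
restored = np.zeros_like(X)
for k in range(len(compressed)):
    if 10 * k < len(restored):
        restored[10 * k] = compressed[k]
x_restored = np.fft.irfft(restored, n=len(x))

# Both original tones come back as a single component at 440 Hz.
peak = int(np.argmax(np.abs(np.fft.rfft(x_restored))))
print(peak)  # 440
```

The 444 Hz component has not been shifted or filtered out; it has been merged into the 440 Hz bin, which is exactly the "can't distinguish close frequencies" loss raised later in the thread.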

2. Sep 7, 2010

### waht

To perform a Fourier transform on your audio signal, you would first need to digitize the stream, because there is no simple analog way of doing it. One of Shannon's theorems then applies, which gives you an upper limit on the amount of compression you can do. That upper limit depends on the bandwidth of the system and the SNR (signal-to-noise ratio).

But a simple audio signal, whose bandwidth is much smaller than that of a simple RF link, is far from the upper limit, so you have room to do all kinds of weird compressions on it.
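The limit being referred to is the Shannon–Hartley capacity, C = B log2(1 + SNR). A quick calculation with assumed numbers (a 3 kHz voice channel at 30 dB SNR) shows how far a plain audio signal sits below what even a modest RF link can carry:

```python
import math

# Shannon–Hartley channel capacity: C = B * log2(1 + SNR).
# Assumed illustrative numbers: 3 kHz voice bandwidth, 30 dB SNR.
B = 3000.0                    # bandwidth in Hz
snr_db = 30.0
snr = 10 ** (snr_db / 10)     # 30 dB -> linear SNR of 1000

C = B * math.log2(1 + snr)    # capacity in bits per second
print(round(C))               # about 29900 bit/s
```

Roughly 30 kbit/s of capacity in a 3 kHz channel is why there is headroom for aggressive coding of speech, but never below the limit the theorem sets.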

3. Sep 7, 2010

### granpa

I expected that there would be some loss of signal quality,
but I wonder if it would be noticeable.

4. Sep 7, 2010

### waht

I just read your post again. So you divide by 10 in the frequency domain and then take an inverse Fourier transform of it to reverse the process?

If you do that, then your audio will also be scaled by a factor of 10 in the time domain.

5. Sep 8, 2010

### granpa

If anything you would expect the time-domain signal to be stretched tenfold, but that would be dividing the frequencies by 10 while also stretching the time tenfold.

What I am suggesting is that you divide the frequencies by 10 while leaving the time alone.
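The relationship the last two posts are circling is the Fourier scaling theorem: stretching a signal tenfold in time divides all of its frequencies by ten, and vice versa. A small numpy sketch with assumed numbers (a 100 Hz tone and its 10x time-stretched copy) makes the trade concrete:

```python
import numpy as np

# Fourier scaling theorem: x(t/10) has every frequency of x(t) divided by 10.
# Assumed example: a 100 Hz tone over 1 s, and the same waveform stretched
# to 10 s, which turns it into a 10 Hz tone.
fs = 1000
t1 = np.arange(fs) / fs             # 1 second
t2 = np.arange(10 * fs) / fs        # 10 seconds
x_fast = np.sin(2 * np.pi * 100 * t1)
x_slow = np.sin(2 * np.pi * 10 * t2)    # the 10x time-stretched copy

# Peak frequency in Hz = peak bin index / record length in seconds.
f_fast = np.argmax(np.abs(np.fft.rfft(x_fast))) / 1.0
f_slow = np.argmax(np.abs(np.fft.rfft(x_slow))) / 10.0
print(f_fast, f_slow)  # 100.0 and 10.0
```

Dividing the frequencies by 10 while leaving the duration alone is therefore not the scaling theorem; it has to be done by remapping bins, as in the first sketch above, and that remapping is what throws information away.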

6. Sep 8, 2010

### skeptic2

To make this a little clearer let's use the example of FSK. Let's say you are transmitting data using FSK. What you are suggesting really is no different than using two FSK frequencies at 0.1 times the frequencies originally used. Why can't FSK frequencies be spaced closer together in order to save bandwidth?

To gain any benefit from frequencies spaced closer together, the receiver would need much narrower filters. During the transition from one frequency to another, additional sidebands are produced. If a narrower receive filter is used, some of those sidebands are eliminated, making it take longer for the waveform to transition to the other frequency. Because the transition now takes longer, the signal spends a smaller proportion of its time at each frequency and more time in transition.

In other words if you input a square wave into an FSK link, as the frequencies are moved closer together and the filter is narrowed, you would see the received waveform approach a sine wave and then diminish in amplitude. You would not be able to retrieve your square wave by doing an IFT.

So the separation of the FSK frequencies and the filter bandwidth are determined primarily by the maximum data rate.
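One way to see that the tone spacing is tied to the symbol rate: for coherent detection, FSK tones spaced at 1/(2T) over a symbol of length T are orthogonal, and tones squeezed closer than that are no longer cleanly separable. A sketch with assumed numbers (1 ms symbols, so minimum spacing 500 Hz, carrier 2 kHz):

```python
import numpy as np

# Assumed illustrative numbers: T = 1 ms symbol, f0 = 2000 Hz carrier.
# Minimum coherent-FSK tone spacing is 1/(2T) = 500 Hz.
fs = 100_000
T = 0.001
t = np.arange(int(fs * T)) / fs
f0 = 2000.0

def correlation(df):
    """Normalized correlation between tones at f0 and f0 + df over one symbol."""
    a = np.cos(2 * np.pi * f0 * t)
    b = np.cos(2 * np.pi * (f0 + df) * t)
    return abs(np.dot(a, b)) / len(t)

print(correlation(500.0))   # near 0: orthogonal at df = 1/(2T)
print(correlation(100.0))   # well above 0: closer tones interfere
```

Halve the symbol duration and the minimum spacing doubles, which is the thread's conclusion in numbers: you cannot shrink the tone spacing without also shrinking the data rate.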

7. Sep 8, 2010

### granpa

Well, I was thinking mainly about transmitting voice (and maybe even music). I fully expected to lose some information, but I thought that maybe it wouldn't be noticeable.

Instead of losing the higher or lower frequencies, you would just lose the ability to distinguish between frequencies that are close together.

8. Sep 8, 2010

### skeptic2

I think you would be able to distinguish between two constant frequencies that are close together. What you would lose is the transition between frequencies and changes in amplitude. I imagine speech would sound muddled.