# Maximum Information Transferable in Radio Waves

I wondered if people could clear up some misconceptions I have about radio waves.

1) If I have an 800 MHz wave, does this mean 800 Mbits per second can be transferred in theory? Could I not change the amplitude of this wave every 0.00000000125 of a second, so that when a receiver reads it, whether the amplitude is above a relative threshold determines what bit it was? Why isn't this a feasible way to send information?

2) Let's build upon assumption number one. Does this mean I can have a wave that is 801 MHz and achieve the same result, but with an extra 1 Mbit/s of information, operating in the same space as the 800 MHz channel? If not, why not?

3) Are there theoretically an infinite number of frequencies? For example, in assumption 2 I added 1 MHz, but what if I added just 1 extra Hz, or 0.5 Hz, or an even smaller number? Is there a point where I cannot distinguish between the two frequencies in the same space?

4) Can MIMO keep being made smaller to accommodate bandwidth on the same frequencies?

jedishrfu
Mentor
You need to read Claude Shannon's work on information theory:

http://en.wikipedia.org/wiki/Shannon–Hartley_theorem

Your current thinking is confusing MHz with Mbits. With 800 MHz you could probably transmit at most 400 Mbit/s, and when you factor in signal noise you need error-correcting or error-detection bits, which drops your effective data rate even further.

I'm no expert in this and maybe someone more knowledgeable can jump in here.

One can transmit at an arbitrary speed at 800 MHz, provided the signal to noise ratio is high enough for that. Usually, however, one is severely limited in the maximum signal level (the most powerful transmitters currently in use are in the megawatt range), and the noise is determined by temperature, so the signal to noise ratio is limited. Note that the signal to noise ratio is measured at the receiver, not the transmitter; the received power is many orders of magnitude lower than the transmitted power.
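As a rough numeric check on the Shannon–Hartley limit discussed above, here is a sketch; the 1 MHz bandwidth and 1000:1 SNR are arbitrary example values, not figures from any real 800 MHz system:

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley channel capacity, C = B * log2(1 + S/N), in bit/s."""
    return bandwidth_hz * math.log2(1 + snr_linear)

bandwidth = 1e6   # a 1 MHz-wide channel, e.g. carved out around 800 MHz
snr = 1000        # 1000:1 signal-to-noise power ratio (30 dB)

capacity = shannon_capacity(bandwidth, snr)
print(f"Capacity: {capacity / 1e6:.2f} Mbit/s")  # ~9.97 Mbit/s
```

Note that doubling the transmit power only nudges the log term, while doubling the bandwidth doubles the capacity outright, which is why spectrum is the scarce resource.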

sophiecentaur
Gold Member
> You need to read Claude Shannon's work on information theory:
>
> http://en.wikipedia.org/wiki/Shannon–Hartley_theorem
>
> Your current thinking is confusing MHz with Mbits. With 800 MHz you could probably transmit at most 400 Mbit/s, and when you factor in signal noise you need error-correcting or error-detection bits, which drops your effective data rate even further.
>
> I'm no expert in this and maybe someone more knowledgeable can jump in here.

You are certainly right to point out that Hz and Bits/s are not the same thing.

Actually, that's quite the reverse. The fact is that an analogue signal, with very low noise introduced, has a vast amount of information content, because the precision of that analogue signal is massive (infinite when there's no noise) and not just binary. Sending just one bit / s / Hz of bandwidth is hugely under-using the channel. Shannon defines information rate in terms of digital data, and he basically says that you can divide your analogue signal range into as many distinguishable (quantised) steps as the introduced noise level will allow. If you have a 1000:1 ratio of signal to noise power (being very crude here, so don't hold me to the exact numbers), you could send something approaching 10 binary channels, each one with a data rate defined by the 'bandwidth' of the channel.
Practical data signalling doesn't approach this, of course, because of the basic complexity of the coding needed, and because the above performance would require an extremely long sample of received signal to be analysed in order to get all the data out error-free. This is why broadband signals can be carried on naff old low-frequency telephony lines.
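The quantised-steps argument can be sketched as a toy simulation: when the noise is much smaller than the spacing between amplitude levels, 32 levels (5 bits) per sample come through error-free. All the numbers here are illustrative:

```python
import math
import random

random.seed(0)

levels = 32          # 32 distinguishable amplitude steps -> 5 bits per sample
noise_sigma = 0.08   # noise std dev, much smaller than the step spacing of 1.0

# Send 1000 random symbols, add Gaussian noise, decode to the nearest level.
sent = [random.randrange(levels) for _ in range(1000)]
received = [s + random.gauss(0, noise_sigma) for s in sent]
decoded = [min(levels - 1, max(0, round(r))) for r in received]

errors = sum(d != s for d, s in zip(decoded, sent))
print(f"{math.log2(levels):.0f} bits per sample, {errors} symbol errors")
```

Shrink the noise and more levels fit; grow it and the levels blur together, which is exactly the trade-off Shannon quantifies.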

mfb
Mentor
Your change in the amplitude introduces additional frequencies into the signal - roughly in the range "the original frequency ± the frequency of the amplitude changes". If you want to send separate signals at 800 MHz and 801 MHz, your rate of amplitude changes is limited to something of the order of 500 kHz for each channel.

> You are certainly right to point out that Hz and Bits/s are not the same thing.
>
> Actually, that's quite the reverse. The fact is that an analogue signal, with very low noise introduced, has a vast amount of information content, because the precision of that analogue signal is massive (infinite when there's no noise) and not just binary. Sending just one bit / s / Hz of bandwidth is hugely under-using the channel. Shannon defines information rate in terms of digital data, and he basically says that you can divide your analogue signal range into as many distinguishable (quantised) steps as the introduced noise level will allow. If you have a 1000:1 ratio of signal to noise power (being very crude here, so don't hold me to the exact numbers), you could send something approaching 10 binary channels, each one with a data rate defined by the 'bandwidth' of the channel.

So am I right to assume that radio waves can theoretically carry as much information as there are distinguishable steps? I'm sure there is an upper limit, but is it much higher than Hz/2? Wouldn't this conflict with Shannon?

> Your change in the amplitude introduces additional frequencies into the signal - roughly in the range "the original frequency ± the frequency of the amplitude changes". If you want to send separate signals at 800 MHz and 801 MHz, your rate of amplitude changes is limited to something of the order of 500 kHz for each channel.

Why is this the case? I would've thought that if you could distinguish 800 MHz from 801 MHz you could make full use of both frequencies?

jedishrfu
Mentor
SophieCentaur is right. I forgot about stepping a waveform as a means to send data.

With respect to effective data rates, what I meant was: with no noise (a perfect world) you wouldn't need any error correction or detection, but once it's added, your effective data rate is lower, because a percentage of the transmission is error-correction/detection bits and the rest is your actual data.

So, for example, transmitting a 100 KB file might mean you need to send 110 KB, with the extra 10 KB used for error detection and recovery. So your effective data rate is reduced by 10% or so.
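In numbers (using the same illustrative 100 KB / 10 KB figures):

```python
payload_kb = 100    # the actual file
overhead_kb = 10    # error detection / correction bits added to it
total_kb = payload_kb + overhead_kb

# Fraction of the transmitted bits that are real data:
efficiency = payload_kb / total_kb
print(f"Effective rate: {efficiency:.1%} of the raw channel rate")  # 90.9%
```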

mfb
Mentor
Well, how do you distinguish those signals if you modify your 800 MHz signal so quickly that it becomes a signal in the range 790-810 MHz, and your 801 MHz signal becomes a signal in the range 791-811 MHz?
The basic message is: an amplitude modification is also a frequency broadening.

> Well, how do you distinguish those signals if you modify your 800 MHz signal so quickly that it becomes a signal in the range 790-810 MHz, and your 801 MHz signal becomes a signal in the range 791-811 MHz?
> The basic message is: an amplitude modification is also a frequency broadening.

I'm a bit confused here. Why would a modification of an 800 MHz signal's amplitude widen the frequency range to 790-810 MHz? And if this is the case, could I change the amplitude by a smaller amount so as not to encroach on the other frequencies? If so, is there a limit to how slight the change has to be?

If you don't "encroach", then you carry ZERO information. Any change to the wave adds a spectrum to it, and that spectrum carries your information. The broader the spectrum, the more information it carries. The width of this spectrum is called the bandwidth.

sophiecentaur
Gold Member
> So am I right to assume that radio waves can theoretically carry as much information as there are distinguishable steps? I'm sure there is an upper limit, but is it much higher than Hz/2? Wouldn't this conflict with Shannon?
>
> Why is this the case? I would've thought that if you could distinguish 800 MHz from 801 MHz you could make full use of both frequencies?

No, because there is always noise present in signals.
Analogue signalling can be thought of as very wasteful of bandwidth or power, because it presents the source signal exactly as it was 'produced', and this is not necessary. The fact that sound and video signals can be digitally coded and then data-reduced without the impairments being detectable is proof of this. It is, of course, possible to crucify analogue programme material by doing this too savagely, but the basic principle is there. The downside of digitising signals is that you build in an initial distortion / error when you sample and quantise them. However, up to a certain level of added channel noise, you can be sure that these are the only imperfections you will get (after which, unlike good old AM, the signal dies completely).

The problem is that, although you can distinguish between these two frequencies, if you want to transmit actual information you need to be switching each of them on and off (or using some other form of modulation), and that introduces sidebands which take up more spectrum than just your original two frequencies. Frequency Shift Keying does this by sending one symbol as one frequency and the other symbol as the other frequency. It's very basic and can be demodulated with simple equipment. It is, however, inefficient in its use of spectrum.
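A minimal sketch of the Frequency Shift Keying scheme described above, with toy tone frequencies chosen so that each bit interval contains a whole number of cycles (which keeps the two tones orthogonal over a bit):

```python
import numpy as np

fs = 8000          # sample rate, Hz (toy values, nothing like 800 MHz)
f0, f1 = 500, 700  # the two FSK tones: f0 sends a 0, f1 sends a 1
baud = 100         # symbols per second
spb = fs // baud   # samples per bit (80)

bits = [1, 0, 1, 1, 0, 0, 1, 0]
t = np.arange(spb) / fs
signal = np.concatenate(
    [np.sin(2 * np.pi * (f1 if b else f0) * t) for b in bits])

# Demodulate by correlating each bit interval against both reference tones.
ref0 = np.sin(2 * np.pi * f0 * t)
ref1 = np.sin(2 * np.pi * f1 * t)
decoded = [
    int(abs(signal[i * spb:(i + 1) * spb] @ ref1)
        > abs(signal[i * spb:(i + 1) * spb] @ ref0))
    for i in range(len(bits))]
print(decoded)  # [1, 0, 1, 1, 0, 0, 1, 0] -- the original bits back
```

Note the spectral cost: the two tones sit 200 Hz apart here, and the keying itself smears each of them out, which is the inefficiency mentioned above.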

mfb
Mentor
> I'm a bit confused here. Why would a modification of an 800 MHz signal's amplitude widen the frequency range to 790-810 MHz? And if this is the case, could I change the amplitude by a smaller amount so as not to encroach on the other frequencies? If so, is there a limit to how slight the change has to be?
You can do a Fourier transform of your signal. There is only one signal whose spectrum contains a constant 800 MHz and nothing else: an 800 MHz wave with an amplitude that is constant in time. Everything else - switching on and off, or other amplitude modulations - will give additional frequencies.
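This last point can be verified directly with a Fourier transform: a constant-amplitude tone occupies a single frequency bin, while the same tone switched on and off spreads across many bins. Toy frequencies again, not 800 MHz:

```python
import numpy as np

fs, fc = 10_000, 800
t = np.arange(0, 1, 1 / fs)

steady = np.cos(2 * np.pi * fc * t)      # constant amplitude: one spectral line
gate = (np.arange(t.size) % 200 < 100)   # switched on/off 50 times per second
keyed = steady * gate                    # same tone, on-off keyed

def occupied_bins(x):
    """Count frequency bins holding more than 1% of the peak magnitude."""
    mag = np.abs(np.fft.rfft(x))
    return int(np.count_nonzero(mag > 0.01 * mag.max()))

print(occupied_bins(steady), occupied_bins(keyed))  # 1 bin vs. dozens of bins
```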