What are Complex Signals?

Main Question or Discussion Point

I know what complex numbers are, but what actually is a complex signal as opposed to a real signal? I know a little about the Fourier transform, and in that context a complex signal is described in terms of the difference between cosine and sine components. But when I read about it in mobile communications, it is described as an additional degree of freedom that can be transmitted in parallel. It can't be the Fourier transform definition, since a phase shift does not stop the signals from interfering with each other. One text gives an analogy of transmitting two signals through two wires and combining them at the end. But how can this be done through the air without the two signals interfering? The difference between a wave in the complex plane and one in the real plane is making me very confused. Please help.

Uh, I don't think I'm talking about the frequency domain. I'm talking about the complex signal space as commonly represented by I+jQ. I wish to know the difference between I and Q, what they are physically, and how they seem to be transmitted without interfering with each other when signals from different path lengths arrive at the receiver at the same time.

f95toli
Gold Member
I and Q are just the in-phase and quadrature components of the signal.

I=R cos phi
Q=R sin phi

Where R is the amplitude and phi is the (relative) phase.
Or in other words, I and Q are the components of a vector with length R = sqrt(I^2+Q^2) that forms an angle atan(Q/I) with respect to the reference phase.
Hence, you can either describe a signal using amplitude and phase OR I and Q; the latter is more convenient in many situations, especially since there are microwave components that actually use the I and Q signals as their input/output (IQ modulators/demodulators).
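The amplitude/phase and I/Q descriptions above are just two coordinate systems for the same vector. A minimal Python sketch of the conversion (the function names `to_iq`/`to_polar` are mine, just for illustration):

```python
import math

def to_iq(R, phi):
    """Convert amplitude R and phase phi (radians) to I/Q components."""
    return R * math.cos(phi), R * math.sin(phi)

def to_polar(I, Q):
    """Convert I/Q components back to amplitude and phase."""
    return math.hypot(I, Q), math.atan2(Q, I)

# Round trip: an amplitude of 2.0 at a 60-degree phase
I, Q = to_iq(2.0, math.radians(60))
R, phi = to_polar(I, Q)
print(round(R, 6), round(math.degrees(phi), 6))  # 2.0 60.0
```

Note `atan2(Q, I)` rather than a bare `atan(Q/I)`: it handles all four quadrants and avoids dividing by zero when I = 0.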

I already understood the last reply, but what puzzles me is this: Assume the transmitter sends some I+jQ signal through the air. The signal is affected by heavy scattering, as is likely the case in a place with a lot of obstacles like a city. So as a result the receiver will receive, at the same time, many copies of the same signal but with random phase shifts, since the signals arrive through different paths. How will the receiver then differentiate between the I and Q?

Born2bwire
Gold Member
Short answer: It doesn't, if you are naive about it (ever seen ghosting with an analog over-the-air TV antenna?).

Long Answer: Iiiiiiiiiittttttt doooooeeeeeessssn... Nah, just kidding. Using intelligent coding schemes will allow you to differentiate between the desired signal and ghost/noise signals in phase space. For example, a simple two-bit quadrature scheme can be achieved by allowing the phase to take only the values 0, 90, 180, and 270 degrees at a single amplitude. This lets you manipulate the phase of the signal to indicate one of four two-bit words. Noise will cause the received signals to migrate around the original phase point as small phase shifts and destructive/constructive interference occur. But the correct use of a learning filter will allow the receiver to recognize the behavior of the added noise and correct for it. This is done by preceding a communication with a predetermined bit string that serves as the training code. This is like the handshake in your old phone modem: when the modem connects, it does the whole handshaking "hi, how are you" routine, but the routine is always the same, so the receiver can monitor the received signal, compare it against what it should be, and train the filters to account for the noise. Of course the filters keep adjusting for the noise as you receive more signals, but they need a starting reference point to have an accurate estimate.

So all of this allows for greater noise suppression and large amounts of information to be encoded into a single signal. As the noise in a signal increases, the encoding scheme is made more robust. You'll notice this with your Wi-Fi: if you have a weaker signal, the bit rate drops as the router and your computer agree to switch to a more robust encoding scheme. Eventually it gets so bad that it goes to pot, of course.

Last edited:
Ah, I'm starting to get it. Alright, let me clarify some points. So in the four-phase example you just gave, the phase difference between the strongest signals and the mean phase should not vary by more than roughly 45 degrees. So a frequency must be chosen that is not too high (wavelength not too small) for this to hold. If the phase difference is more than around 45 degrees, then the system backs off to a slower encoding scheme (i.e., one with two phases of 0 and 180 degrees). In short, the phases at the receiver do not vary so much as to become completely random. Signals sent with the same phase will be grouped together and, on the whole, distinctly phased at the receiver compared with signals sent with a different phase. No matter which path a signal takes, its strongest replica signals will not vary in phase from the mean by more than roughly 45 degrees (an eighth of a wavelength). Is this correct?

Last edited:
Born2bwire
Gold Member
Yes. But just to clarify, there are two things working here. The first, as you mentioned, is the hope that the phase shifts are small enough that the received points are still close to the phase as transmitted. In this manner, all we would do is map the received signal back to the closest valid point in phase space.

The second is that we can use a filter that "learns" the noise to further aid in recovering a signal. For example, let's say we have a situation where the channel always adds a constant 45-degree phase shift to all signals. In this case, the closest-match strategy above would always fail. But if we had a filter that could recognize this shift, then we could perfectly reconstruct the original signal. So what we do is use a filter with a feedback loop to adjust itself. We send a known training signal to get an idea of how the noise is affecting the signal and initialize our filter off of that (the filter would notice the constant phase shift and adjust itself to remove it). Then we start sending data, but the filter is still using feedback to continually correct itself. For example, say the constant 45-degree phase shift becomes a constant 180-degree phase shift because you moved the receiver to a new location. The continuous feedback would allow the filter to note the transition and adjust for it on the fly.
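A minimal sketch of that training step, assuming the simplest possible channel model (a single constant phase rotation; the helper names and the channel model are mine, real equalizers handle much more):

```python
import cmath

def estimate_rotation(sent, received):
    """Average the per-symbol rotation received/sent over the training sequence,
    then normalize to a unit-magnitude phase estimate."""
    total = sum(r / s for s, r in zip(sent, received))
    return total / abs(total)

# Training symbols known to both ends in advance.
training = [1, 1j, -1, -1j]

# Hypothetical channel: an unknown constant 45-degree phase shift.
channel = lambda z: z * cmath.exp(1j * cmath.pi / 4)

# Receiver estimates the rotation from the training sequence...
rot = estimate_rotation(training, [channel(z) for z in training])

# ...then divides it back out of the data symbols that follow.
data = [1j, -1, 1, -1j]
corrected = [channel(z) / rot for z in data]
print(all(abs(c - d) < 1e-9 for c, d in zip(corrected, data)))  # True
```

The continuous-feedback part of the post would correspond to re-running this estimate on decided symbols as data flows, so a shift from 45 to 180 degrees gets tracked on the fly.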

So even if the noise starts to shift points enough to cause incorrect decoding, a good filter can help remove these problems. Only when the filter and encoding scheme can no longer keep up with the noise do we drop down to a more robust scheme. But yes, you essentially have the gist of it.

EDIT: One thing to note about your frequency statement. There are two parts to a signal: the actual information content is modulated onto a high-frequency carrier signal. For example, with only four points in phase space we only need a small amount of bandwidth. So the actual signal is low frequency, but we modulate it onto the desired carrier frequency. The frequency of the information itself can be low or high; it doesn't matter, because we are only concerned with the phase-space representation. The carrier frequency is usually chosen by whatever bands we are allowed to operate in.

However, higher frequencies allow us to pack in more information by having additional channels. We may only need 1 MHz of bandwidth to encode a single channel of information. So if we have a carrier operating at 100 MHz, we may only be able to have 10 channels, from 95-105 MHz, because we are then using 1/10th of the band at 100 MHz. But at 1 GHz, 1/10th of the band would allow us to have 100 channels. If each channel is our two-bit scheme above, we get 10 times the capacity by going from 100 MHz to 1 GHz. The additional channels allow us to service multiple users, as in Wi-Fi. This is why, if you have a lot of users with heavy traffic on a single access point, you will see lower bandwidth: you are allocated fewer channels. So there are a lot of factors that go into choosing the carrier frequency; an engineer who actually designs or works with the standards for such devices would know more. But I just wanted to note that a lower carrier frequency is generally associated with lower bandwidth/information capacity.
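The channel arithmetic in that last paragraph can be checked directly. A tiny sketch, assuming (as the post does) 1 MHz per channel and 1/10th of the carrier frequency available as usable band:

```python
def channel_count(carrier_hz, channel_bw_hz=1e6, band_fraction=0.1):
    """Channels that fit if band_fraction of the carrier frequency is usable,
    each channel needing channel_bw_hz of bandwidth."""
    return int(carrier_hz * band_fraction / channel_bw_hz)

print(channel_count(100e6))  # 10 channels around a 100 MHz carrier
print(channel_count(1e9))    # 100 channels around a 1 GHz carrier
```

The 10x jump in channel count is exactly the 10x capacity gain the post describes for moving from 100 MHz to 1 GHz.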

Last edited: