CDMA Receiver Capturing: How Does It Work?

Summary:
A CDMA receiver captures signals using unique digital codes that allow it to differentiate between multiple transmitters operating on the same frequency. Even when many signals are present at equal power levels, the receiver decodes the desired signal by tuning into its specific code, which appears as noise to others without the code. The receiver does not separate channels at the RF stage; instead, it processes all signals simultaneously until the baseband detection stage. This approach results in a lower signal-to-noise ratio (SNR) compared to traditional AM/FM radios, as the noise floor rises with more channels. The cellular system design mitigates this issue by ensuring strong signal strength through close proximity to transmitters and infrastructure support.
paulfr
A receiver that detects two transmitters will be "captured" by the transmitter with the strongest signal. So how does a CDMA receiver operate when hundreds of signals of the same strength, on the same band, are all trying to capture it?

I understand the Code [in CDMA] allows it to know which signal source it is decoding, but how does it reject all the other signals?

I am a bit embarrassed to ask this question because I worked on VSAT systems before but never got how this part works. I am primarily a digital guy, so the front ends are not my forte.

Thanks
 
Each transmitter is separated by a "code", hence the name Code Division Multiple Access (http://en.wikipedia.org/wiki/Code_division_multiple_access#Steps_in_CDMA_Modulation). The codes are chosen so that the different signals from each transmitter are mathematically unique and separable. The receiver is simply tuned to the code desired. The code looks random, so if you don't have the code the signal looks like noise to the receiver circuits; but if you know the code, the signal simply pops out of the "noise", because you know "where the noise is going to be next".
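To see why the wanted signal "pops out", here is a minimal numerical sketch, assuming a toy set of 4-chip Walsh codes (illustrative only, not any real standard's code set):

```python
import numpy as np

# Hypothetical 4-chip Walsh codes: every pair of rows is orthogonal.
codes = np.array([
    [1,  1,  1,  1],   # user 0
    [1, -1,  1, -1],   # user 1
    [1,  1, -1, -1],   # user 2
    [1, -1, -1,  1],   # user 3
])

bits = np.array([1, -1, -1, 1])  # one data bit (+1 or -1) per user

# Each user multiplies its bit onto its own chip sequence;
# the air interface simply sums all four transmissions.
received = (bits[:, None] * codes).sum(axis=0)

# Despreading: correlate the composite signal with each user's code.
for u in range(4):
    decoded = received @ codes[u] / codes.shape[1]
    print(f"user {u}: decoded bit {decoded:+.0f}")
```

Because the codes are orthogonal, the correlation for each user returns exactly that user's bit; the other three transmissions cancel to zero even though they arrive at identical power.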
 
Thanks for the response.

I understand the digital code is what discriminates among the channels. But all the transmitters [say 50K cell phones talking simultaneously to a cell tower] transmit at the same power level. So how does any receiver pick up only one signal at a time on the front end?

Stated a different way, how can a receiver pick only one of several transmitters on the same frequency band? If one is, say, 10 dB stronger, then it is clear: it will capture the receiver. FM capture is 13 dB. But how can the receiver coherently decode a signal if there are 2 or 10 or 50K present at the same time?
 
It's different from, say, an AM or FM radio, which tunes in a specific station by frequency. A CDMA receiver (or a receiver for any other spread-spectrum signal) instead has a wide-band RF front end, because all the stations occupy exactly the same stretch of spectrum. So all the stations are present through the RF and IF stages; no channel separation is performed until baseband, essentially at the detector stage. In short: the radio does not separate stations at the front end at all.

Your intuition is correct: something "has to give" with this, and that is the noise figure of the RF section. The received SNR, apples to apples, isn't going to be as good as that of a radio that pre-selects a narrow channel in the RF stage.

But a spread-spectrum signal is wide-band, so you couldn't separate it at RF anyway; all of the stations occupy the same bandwidth/channel. Poorer SNR is the price of going digital, and especially of going spread-spectrum. As you add more channels, the noise floor of the RF spectrum rises, and that does have an effect on the RF section.
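To make the rising-noise-floor point concrete, here is a toy simulation, assuming random ±1 chip sequences as stand-ins for real PN codes (NumPy, illustrative numbers only):

```python
import numpy as np

rng = np.random.default_rng(1)
n_chips = 256  # spreading factor: chips transmitted per data bit

def despread_user0(n_users):
    # Random +/-1 sequences play the role of each user's PN code.
    codes = rng.choice([-1, 1], size=(n_users, n_chips))
    bits = rng.choice([-1, 1], size=n_users)
    # The antenna sees everyone at once -- no separation at RF.
    composite = (bits[:, None] * codes).sum(axis=0)
    # Baseband detector: correlate against user 0's code only.
    return bits[0], composite @ codes[0] / n_chips

for n_users in (1, 10, 50, 200):
    sent, stat = despread_user0(n_users)
    print(f"{n_users:3d} users: sent {sent:+d}, detector output {stat:+.2f}")
```

The wanted bit always contributes ±1 to the detector output, while the other users leave a residual whose standard deviation grows roughly as sqrt((n_users − 1) / n_chips). That residual is the rising noise floor: with enough spreading the sign of the output still almost always matches the sent bit, but each added user eats into the margin.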

The thing that is different (for cellular) from AM/FM is the cell: the range between radios is kept small by design, so the signal can be assured to be strong enough. Having a "problem" in this sense actually helps define the cell boundaries more strongly. This isn't an amateur-radio "QRP" scenario: the system assumes an infrastructure of multiple cells, and backchannel hand-off between cells keeps the RF strength high.

For other applications like WiFi, there is no expectation of wide coverage (kilometers), so the signal strength is what it is at the fringes, only tens of meters from the base station.

Phone-line modems face a similar situation, but with a tightly constrained physical bandwidth as well.

Think of it this way: you can't get something for nothing. Squeezing more digital bandwidth out of a given spectrum width than it would normally allow has to be paid for, and you pay with SNR. This is part of the "cliff effect" (http://en.wikipedia.org/wiki/Cliff_effect), combined with the threshold effect of digital data synchronization and error correction.
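To put a rough number on that trade, the processing gain recovered at the despreader is the ratio of chip rate to bit rate. A back-of-the-envelope sketch using IS-95-style figures (purely illustrative):

```python
import math

chip_rate = 1.2288e6  # chips per second (IS-95-style figure)
bit_rate = 9.6e3      # bits per second

processing_gain = chip_rate / bit_rate      # 128 chips per bit
gain_db = 10 * math.log10(processing_gain)  # about 21 dB

print(f"processing gain: {processing_gain:.0f}x = {gain_db:.1f} dB")
```

Despreading concentrates the wanted signal by that roughly 21 dB relative to the wide-band floor, which is what lets a bit climb back out of a received SNR sitting at or below the noise; lose a few dB more than the margin allows and you fall off the cliff, because synchronization and error correction fail together rather than degrading gracefully.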
 