mariano54
Hi, I've been trying to understand DSSS, but I'm not an engineer and have trouble with one point.
I get that multiplying the user signal by the chip sequence is what spreads the signal. My question is: why is this?
If the data rate is, say, 1 bps, and the chip sequence is 1000 bits long, then the final signal runs at 1000 bps, which requires 1000 Hz = 1 kHz.
From my perspective, that would mean I have a signal oscillating at 1 kHz, but I know that's wrong: what it actually means is that the signal has a bandwidth of 1 kHz (and is therefore spread). Why does increasing the frequency mean that the signal spans a whole range of frequencies instead of a single higher frequency?
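Not part of the original question, but here is a minimal numerical sketch of the effect being asked about: it builds a 1 bps data stream, spreads it by multiplying with a 1000-chip-per-bit pseudo-random sequence, and estimates how much bandwidth each signal occupies. All parameter and function names (`fs`, `chips_per_bit`, `occupied_bandwidth`) are illustrative assumptions, not anything standardized.

```python
# Sketch: why multiplying by a fast chip sequence spreads the spectrum.
# We compare the occupied bandwidth of raw 1 bps data against the same
# data multiplied by a 1000 chips/bit +/-1 sequence (DSSS baseband).
import numpy as np

rng = np.random.default_rng(0)

chips_per_bit = 1000          # chip sequence length per data bit
n_bits = 16                   # number of data bits to simulate
fs = chips_per_bit            # one sample per chip -> 1000 samples/s

# Data at 1 bps, mapped to +/-1 and held for 1000 samples (one second) each.
data_bits = rng.integers(0, 2, n_bits) * 2 - 1
data_signal = np.repeat(data_bits, chips_per_bit)

# Pseudo-random +/-1 chip sequence at 1000 chips/s, repeated for every bit.
chips = rng.integers(0, 2, chips_per_bit) * 2 - 1
chip_signal = np.tile(chips, n_bits)

# DSSS baseband signal: data multiplied by the chip sequence.
spread_signal = data_signal * chip_signal

def occupied_bandwidth(x, fs, fraction=0.99):
    """Frequency below which `fraction` of the signal power lies."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    cumulative = np.cumsum(spectrum) / spectrum.sum()
    return freqs[np.searchsorted(cumulative, fraction)]

print(f"99% of data power below   ~{occupied_bandwidth(data_signal, fs):6.1f} Hz")
print(f"99% of spread power below ~{occupied_bandwidth(spread_signal, fs):6.1f} Hz")
# Typical result: the raw data occupies only a few Hz, while the spread
# signal occupies on the order of the chip rate (hundreds of Hz within
# the 500 Hz Nyquist band here). The energy is smeared across the band,
# not concentrated in a single 1 kHz tone.
```

The point the sketch tries to make: the chipped waveform flips sign pseudo-randomly, so its energy is distributed over roughly the chip-rate bandwidth rather than appearing as one discrete higher-frequency line.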