RSSI (Received Signal Strength Indication)

  1. Dec 26, 2008 #1
    I am trying to build an RSSI circuit for an incoming signal, and I would like the system to have the highest accuracy available. But my problem is that I understand very little about what RSSI actually is.

    We are in the prototyping phase of the system, so the frequency isn't set in stone yet. What is certain is that we will be using a BPSK digital signal. The proposed frequencies are between 900 MHz and 1.5 GHz (1.5 GHz preferred).

    I'm not asking to have someone design this for me; I will do that. I'm just having a heck of a time finding examples to get a feel for the theory and the design. I would really appreciate any help.

    Thank you very much,
    Ryan Lovelett

    Already found:
  3. Dec 26, 2008 #2
    I thought it was a bit crude. You need to calibrate it against a spectrum analyser; a switched attenuator can be useful.
  4. Dec 26, 2008 #3
    Typically the ambient noise floor is around -110 dBm in practical applications. What you get at the output of the receiver (at the IF tap point) is the total raw power (noise + signal).
    We take out the ambient noise from that and quantize the estimated received signal into a set of buckets - say 0 to 31 (as could be read off an FPGA register by the firmware or software). 0 could mean the lower limit, say -108 dBm, and 31 the upper limit, say -60 dBm, typically the maximum WiFi reception signal strength.

    Now, it is sometimes clumsy to deal with dBs or dBms in firmware or software. Hardware engineers love dBs and dBms; SW guys hate them. So we bucketize the power values such that the buckets are indices into a table of mapped power values.
    Hence the term "Received Signal Strength Index", or RSSI.

    What do you think the cellphone does to show you 1, 2, 3, 4, 5 bars of signal?
    It could be just hashing on the RSSI table :-).
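    To make the bucketing concrete, here is a rough C sketch using the example limits above (index 0 at about -108 dBm, index 31 at about -60 dBm). The constants and names are purely illustrative, not from any particular chipset:

        /* Map an estimated signal power (ambient noise already subtracted)
         * onto a 0..31 RSSI index, using the example limits above. */
        #include <stdio.h>

        #define RSSI_MIN_DBM     (-108.0)
        #define RSSI_MAX_DBM     (-60.0)
        #define RSSI_NUM_BUCKETS 32

        static int rssi_index(double signal_dbm)
        {
            double step = (RSSI_MAX_DBM - RSSI_MIN_DBM) / (RSSI_NUM_BUCKETS - 1);
            int idx = (int)((signal_dbm - RSSI_MIN_DBM) / step + 0.5);

            if (idx < 0)                    idx = 0;                     /* clamp at the floor   */
            if (idx > RSSI_NUM_BUCKETS - 1) idx = RSSI_NUM_BUCKETS - 1;  /* clamp at the ceiling */
            return idx;
        }

        int main(void)
        {
            printf("-90 dBm -> index %d\n", rssi_index(-90.0));   /* prints 12 */
            printf("-65 dBm -> index %d\n", rssi_index(-65.0));   /* prints 28 */
            return 0;
        }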

    hope that helps,
  5. Dec 26, 2008 #4
    So to test my understanding of the system I should do it like this:

    1) Capture the incoming signal.
    2) Turn it to an intermediate frequency (IF).
    3) Capture the power of that converted signal (in dB or dBm).
    4) Quantize (or digitize) that output.
    5) Deliver to FPGA.

    Is my understanding correct? The one part of the system that I may need help with, as I've never done it before, is part 3. How would you determine the power of a signal in dB or dBm?

  6. Dec 27, 2008 #5
    Power sensing in the microwave regime is usually done using a calibrated diode operated in its square-law region.
    Also, there is no need to down-mix the signal if you use this method; the DC voltage across the diode is simply proportional to the incident power, so all you need is an ADC.
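    In case it is useful, the arithmetic on the ADC side is short. A minimal C sketch, assuming a detector in its square-law region; the reference voltage, ADC width, and detector/amplifier slope below are invented placeholders that would really come from calibration against a known source:

        /* Convert a raw ADC reading from a square-law diode detector into dBm.
         * In the square-law region the detector voltage is proportional to the
         * incident power: P_mW = V_det / k, where k (volts per mW) is measured
         * during calibration. */
        #include <math.h>
        #include <stdio.h>

        #define ADC_VREF            3.3     /* ADC reference voltage, volts (illustrative)        */
        #define ADC_FULLSCALE       4095.0  /* 12-bit ADC (illustrative)                          */
        #define DET_SLOPE_V_PER_MW  500.0   /* detector + amp slope from calibration (illustrative) */

        static double adc_to_dbm(unsigned raw)
        {
            double v_det = (raw / ADC_FULLSCALE) * ADC_VREF;  /* ADC counts -> volts  */
            double p_mw  = v_det / DET_SLOPE_V_PER_MW;        /* volts -> milliwatts  */
            return 10.0 * log10(p_mw);                        /* milliwatts -> dBm    */
        }

        int main(void)
        {
            printf("raw 620 -> %.1f dBm\n", adc_to_dbm(620));  /* about -30 dBm here */
            return 0;
        }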
  7. Dec 27, 2008 #6
    Actually, the 5 steps are not all that complicated.

    First, if the frequency is high, which in your case it is (1.5 GHz), you may need to down-convert to IF and then use a power detector to get an analog signal. Then you need an ADC to convert the analog signal to digital (this will be in linear units), and you run the digital samples through an FPGA or some other programmable logic that converts linear to log scale, that is, dB or dBm, and that output is available to you for use. Note that the RSSI table can be inside the FPGA or other programmable logic itself.
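    As a rough illustration of that linear-to-log step, here is the kind of approximation that stays cheap in firmware or an FPGA: every doubling of linear power is about 3.01 dB, so the position of the most significant bit gives the coarse value and a linear fix-up inside the octave keeps the error to roughly 0.3 dB. The sample value is made up:

        /* Approximate 10*log10(x) for an integer power sample, FPGA/firmware
         * style: 3.01 dB per octave plus a linear interpolation inside the
         * octave.  Worst-case error is on the order of 0.3 dB. */
        #include <math.h>
        #include <stdio.h>

        static double linear_to_db(unsigned long x)
        {
            if (x == 0)
                return -999.0;                         /* "no signal" sentinel  */

            int msb = 0;
            for (unsigned long v = x; v > 1; v >>= 1)
                msb++;                                 /* msb = floor(log2(x))  */

            double frac = (double)x / (double)(1UL << msb);  /* 1.0 .. just under 2.0  */
            return 3.0103 * msb + 3.0103 * (frac - 1.0);     /* interpolate the octave */
        }

        int main(void)
        {
            unsigned long sample = 20000;              /* made-up linear power sample */
            printf("approx %.2f dB, exact %.2f dB\n",
                   linear_to_db(sample), 10.0 * log10((double)sample));
            return 0;
        }

    An offset from calibration then relates this relative dB value to absolute power in dBm; in an FPGA the same conversion is often done with a lookup table instead of floating point.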
  8. Dec 27, 2008 #7
    It depends how accurate you want to be.

    You need to calibrate the system, and as far as I know the only way is with a bolometer. A switched attenuator can provide accurate steps down from a precise RF level.

    But the RF level itself is not of any great significance unless you know where the receiver noise floor lies.

    A simple way of testing the sensitivity on microwave links was to insert attenuation until the signal became unusable. A 30 dB 'fade margin' was considered to be pretty good (e.g. if the nominal received level is -70 dBm and the link stays usable down to -100 dBm, the fade margin is 30 dB).
  9. Dec 27, 2008 #8
    The ADC needs to be trained, and the linear and dBm tables should be built in the firmware.
    Then the RSSI table is built. This whole process is "calibration". One approach these days, which I have used myself, is to use a signal generator (Tektronix or Agilent) that generates a modulated training signal of your choice - BPSK or whatever-SK, even canned CDMA and GSM signal patterns - at the correct frequency and power levels. The generated signal is passed through a tunable attenuator (as you said) and the ADC response is captured at various points. Then use 3- or 4-point interpolation to get the ADC characteristics, build your linear and dBm tables, and program a PROM etc. for lookup. Remember to remove the -110 dBm noise floor.
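    A sketch of that table-building step, with invented calibration points: piecewise-linear interpolation between the captured (ADC count, known power) pairs fills a lookup table that could then be burned into a PROM.

        /* Build an ADC-count -> dBm lookup table from a handful of calibration
         * points captured with the signal generator and tunable attenuator.
         * The calibration points below are invented for illustration. */
        #include <stdio.h>

        #define NUM_CAL  4
        #define TABLE_SZ 256                    /* pretend 8-bit ADC, for brevity */

        static const double cal_counts[NUM_CAL] = {  10.0,  60.0, 150.0, 240.0 };
        static const double cal_dbm[NUM_CAL]    = { -108.0, -95.0, -78.0, -62.0 };

        static double interp_dbm(double count)
        {
            if (count <= cal_counts[0])           return cal_dbm[0];
            if (count >= cal_counts[NUM_CAL - 1]) return cal_dbm[NUM_CAL - 1];

            int i = 0;
            while (count > cal_counts[i + 1])
                i++;                              /* find the bracketing segment */

            double t = (count - cal_counts[i]) / (cal_counts[i + 1] - cal_counts[i]);
            return cal_dbm[i] + t * (cal_dbm[i + 1] - cal_dbm[i]);
        }

        int main(void)
        {
            double dbm_table[TABLE_SZ];
            for (int c = 0; c < TABLE_SZ; c++)
                dbm_table[c] = interp_dbm((double)c);            /* the table to program */

            printf("ADC count 100 -> %.1f dBm\n", dbm_table[100]);  /* about -87.4 */
            return 0;
        }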