
RF fundamentals, how does bandwidth affect throughput

  1. Aug 24, 2012 #1
    I'm trying to get a basic understanding of RF DAC.

    If I have a DAC that does 100 MS/s with an 8 bit resolution, this translates to a 800 Mbps throughput. Or is this too simple?

    Now, how does the bandwidth of a channel affect this? Say I have two 5 MHz channels. With two channels there should be 2 x 800 = 1600 Mbps. But does this change with two 10 MHz channels instead of two 5 MHz channels?
     
  3. Aug 24, 2012 #2
    If each channel is only 5 MHz wide, you get on the order of 5 megabits per second through it, not over 10 megabits per second. (I am not sure how to count a bit stream against frequency, because I am not sure you can count two bits per cycle when each cycle has a positive and a negative half!) But anyway, you are not going to get over 10 megabits per second no matter what. With two channels, you can get at most twice the limit of each channel. You are comparing apples and oranges with the DAC.
     
  4. Aug 24, 2012 #3
    Hmm, I don't really follow. Keeping the 100 MS/s and 8-bit resolution, let's say I have a channel from 110 MHz to 120 MHz (centered at 115 MHz). How does increasing the channel width affect the throughput, when the resolution is still 8 bits?
     
  5. Aug 24, 2012 #4
    The bandwidth does not imply the bit rate or vice versa... it also depends on the coding scheme, which you have not specified.

    If you have a channel from 110 MHz to 120 MHz a 100 MS/s DAC may be overkill (assuming of course you have a TX mixer). You really only need a 25 - 40 MS/s DAC depending on your reconstruction filter.

    All things being equal, increasing the channel width will increase the throughput because it will enable more complex modulation. It isn't a simple question. You have to change how you're coding your bits to take advantage of the extra bandwidth. It doesn't happen automatically.
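    To put rough numbers on that last point, here is a minimal sketch (my own illustrative figures, not anything from the posts above): it assumes a single-carrier link with ideal raised-cosine filtering at 25% roll-off and ignores coding overhead and guard bands.

[code]
def throughput_bps(channel_width_hz, rolloff, bits_per_symbol):
    """Approximate throughput of a single-carrier link.

    Assumes ideal raised-cosine pulse shaping, so the symbol rate
    that fits in the channel is width / (1 + rolloff).
    """
    symbol_rate = channel_width_hz / (1.0 + rolloff)
    return symbol_rate * bits_per_symbol

for width_hz in (5e6, 10e6):                    # the two channel widths from post #1
    for name, bits in (("QPSK", 2), ("64-QAM", 6)):
        rate = throughput_bps(width_hz, rolloff=0.25, bits_per_symbol=bits)
        print(f"{width_hz / 1e6:.0f} MHz, {name}: {rate / 1e6:.0f} Mb/s")
[/code]

    Doubling the channel width doubles the symbol rate (8 vs 16 Mb/s at QPSK here), while moving to a denser constellation multiplies the bits per symbol, but only if the SNR supports it.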
     
    Last edited: Aug 24, 2012
  6. Aug 24, 2012 #5
    I think you are mixing things up. You have an 8-bit DAC capable of running at 100 MHz, so you are generating 8 x 100 MHz worth of data. That means you can generate 100 mega data points per second, each data point with 8 bits of resolution. Nothing more.

    But then you talk about two data channels (I don't even know exactly what you mean by that), each only 5 MHz wide. Through those two channels you can only send 2 x 5M bits (or say 2 x 10M maximum) worth of information per second. The channels and the DAC are two different, unrelated things.

    Is this something new, or new terminology since the days when I designed data acquisition systems for LeCroy? What are you trying to do? Give us more specific information first.
     
  7. Aug 25, 2012 #6
    Bandwidth has a number of definitions and we need to be a little careful how we use it. A signal has a bandwidth, which is the difference between the frequencies of the highest and lowest sine waves that compose the signal. A channel has a channel width, and for a signal to remain undistorted, the bandwidth of the signal must fit within the channel width.

    In practice, the signal must be filtered so it doesn't intrude on adjacent channels. The filters don't have vertical edges but are sloped, so the attenuation of the edges of the signal must begin inside the channel width. This means that the bandwidth must be somewhat narrower than the channel width.

    In order to find the channel width needed, you need to find the bandwidth of your signal. It is not as simple as saying it must be 800 MHz because you're sampling at 100 MS/s at 8-bit resolution. Also, there are many ways of compressing the data so you don't have to send so many bits. A simple one: if the difference between one sample and the next fits in fewer than 8 bits, just transmit the difference rather than the sample.
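    As a toy illustration of that difference-coding idea (a sketch only; a real codec would use variable-length codes and handle differences that don't fit in the smaller word):

[code]
def delta_encode(samples):
    """First sample in full, then only sample-to-sample differences.
    Slowly varying data gives small differences that fit in fewer bits."""
    return [samples[0]] + [cur - prev for prev, cur in zip(samples, samples[1:])]

def delta_decode(diffs):
    out = [diffs[0]]
    for d in diffs[1:]:
        out.append(out[-1] + d)
    return out

raw = [100, 102, 101, 105, 104, 104, 106]   # slowly varying 8-bit samples
enc = delta_encode(raw)                      # [100, 2, -1, 4, -1, 0, 2]
assert delta_decode(enc) == raw              # lossless round trip
[/code]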

    The bandwidth of a signal is affected by many factors so it is generally measured with a spectrum analyzer rather than calculated.
     
    Last edited: Aug 25, 2012
  8. Aug 25, 2012 #7

    sophiecentaur

    Science Advisor
    Gold Member

    There's a basic fact that needs to be considered here. The form of modulation system you use will determine the number of b/s for a given analogue bandwidth (do you mean 3dB?) and a given signal-to-noise ratio. Putting it crudely, if you have loads of signal-to-noise ratio, your analogue signal can carry any number of signal levels, so you are not restricted to binary signalling; four levels will double the bit rate. (In fact, WHO ever uses simple binary these days?) The choice involves both the modulation system and the coding system, not just one of them, and, as mentioned earlier, the filtering can be important if other users are involved.
    Whilst it's true that a simple binary system is easiest to design, as soon as you want to squeeze a useful amount of data through any channel, it's worthwhile considering something more sophisticated. It's what digital TV, radio and mobile comms all do.
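    As a rough sketch of that trade (assuming simple PAM-style signalling at a fixed symbol rate, and the usual rule of thumb that each doubling of the level count costs roughly 6 dB of SNR to keep the levels distinguishable):

[code]
from math import log2, log10

symbol_rate = 1e6   # symbols per second, fixed here by the channel filtering

for levels in (2, 4, 8, 16):
    bits = log2(levels)                     # bits carried per symbol
    extra_snr_db = 20 * log10(levels / 2)   # ~6 dB per doubling of levels
    print(f"{levels:2d} levels: {bits:.0f} bits/symbol, "
          f"{bits * symbol_rate / 1e6:.0f} Mb/s, ~{extra_snr_db:.0f} dB extra SNR")
[/code]

    Four levels do indeed double the binary rate, at the cost of about 6 dB more SNR.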
     
  9. Aug 25, 2012 #8
    What is a channel? This is new to me!
     
  10. Aug 25, 2012 #9

    sophiecentaur

    Science Advisor
    Gold Member

    Indeed, what is a channel? If a channel is described in terms of Hz, then we are talking about its spectral occupancy (in some way). In conventional terms that means the difference between the frequencies at which the mean power has fallen to, say, half the peak. But you'd also need to give some idea of the signal-to-interference/noise ratio before the simple 'bandwidth' figure would be useful.
    If a channel is described in terms of b/s then that's what it is. You don't need to specify any more about it until noise and interference are considered, so even in that case you would need to specify the data rate and some measure of error rate / statistics, blah blah.

    It's a shame, imo, that digital capacity is referred to as 'bandwidth' and not just 'data-rate' because that would put things better into perspective. I'd bet that the first time the term was mis-used it wasn't by an Engineer! Smacks of Sales-person to me.
    But I'm really only repeating what's been written earlier.
     
  11. Aug 25, 2012 #10
    I remember, 10 years ago when I was working on SONET OC48 and OC192, we would put a pseudo-random signal through the transceiver/link combo and display it on an ultra-fast scope. If you see the "eye" of the eye pattern open, you call it good! We never worried about the BW; it was the bit rate and how clear the eye was that mattered. When the "eye" starts drooping, that's the end of the road.

    So physically, is a channel one single connection? Like a serial data link?
     
  12. Aug 25, 2012 #11

    sophiecentaur

    Science Advisor
    Gold Member

    You are making this all too simple, I'm afraid. In conditions where there is a significant (but usable) error rate the eye pattern may be difficult to assess. The eye pattern is all about bandwidth. Clearly, you need your transmit and receive filters to produce a good eye pattern in a simple slicing circuit but I think you'll find that modern systems have gone beyond that.
    Also, when you say "never worry about the BW" you are forgetting the problem of outgoing interference and about being a good neighbour.

    If "channel is one single connection" then one should describe it in terms of the data (information) it carries - which is not 'bandwidth'.

    Btw, what modulation system were you using, and what coding? If you look for the equivalent of an eye pattern in PAM systems, it isn't necessarily so clear.
     
  13. Aug 25, 2012 #12
    I mainly dealt with the very front-end physical link; I didn't even work with the modulation scheme. I wasn't in the field long: I got in for a few months and got out. I found I wasn't interested in the telecom field. My interest is in RF, and I don't think I got much of it there, even though they were dealing with 2.5 to 10 GHz at the time. All I saw was buying the optical transceivers; the job was then to interface to the transceiver. Laying out the differential pairs is of utmost importance. I just talked to an engineer at Cisco: they are doing 15 GHz and higher, and even a via on the PCB is a big deal. They have to cut away part of the copper of the via feedthrough to keep a perfect impedance match going through the via, checked by TDR. Jobs are very compartmentalized, each of us doing one particular thing. But again, I barely scratched the surface of that field, and maybe my comments just reflect my limited experience.

    The eye pattern was the single most important test for us. All the TDR work boiled down to getting a good eye pattern; that rings out impedance mismatch, bandwidth and rise time in one go. Get that done first and then go into more detail. But again, I was in it for only six months, so I am no expert on this.
     
    Last edited: Aug 25, 2012
  14. Aug 25, 2012 #13

    sophiecentaur

    Science Advisor
    Gold Member

    OK - so you would find bandwidth (transmitter and receiver) very relevant. Both are measured in Hz - not bits per second. You would also know that the occupied bandwidth needed for a given demodulated signal-to-noise ratio depends a lot on the form of modulation. For instance, wide-band FM gives you a massive improvement in SNR but a low carrier-to-noise-ratio threshold. It's horses for courses.
    I just want to avoid confusion. (And to inject some of my own haha)
     
  15. Aug 25, 2012 #14
    In a way, yes. We worried a lot about rise time, which shows in the eye, and about reflections, which determine how clean the eye is. Rise time is directly related to the upper frequency limit... BW. We just never talked about BW... at least I didn't! At very high bit rates you don't get a square eye; you don't get square pulses. They look more sinusoidal than anything. So we looked for the link being able to swing full amplitude (eye open)... yes, BW! I guess I just never thought of it that way!

    Ha! This is really the first time I've related rise time to BW in my mind. I designed all sorts of sub-nanosecond rise-time HV pulsers, and I was really looking at rise time and settling time, never worrying about BW, even though they are directly related. We just never looked at it that way. You really don't, because you can have ringing even when the bandwidth and frequency response look fine, and that will absolutely screw everything up. So BW is really not the first thing that comes to mind. All I worried about was whether I could get there on time (rise time), then settle to the right voltage level on time (settling time) so I could start acquiring data.
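    For what it's worth, the usual single-pole rule of thumb tying the two together is BW ≈ 0.35 / t_r (10-90% rise time). A sketch, using a sub-nanosecond edge like the pulsers mentioned above (the 500 ps figure is just an example):

[code]
def bandwidth_from_rise_time(t_rise_10_90_s):
    """Single-pole rule of thumb: BW(-3 dB) ≈ 0.35 / t_r(10-90%).
    Only an approximation; links with ringing or several poles deviate."""
    return 0.35 / t_rise_10_90_s

print(f"{bandwidth_from_rise_time(500e-12) / 1e9:.2f} GHz")   # ~0.70 GHz
[/code]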
     
  16. Aug 25, 2012 #15
    Actually, you caught me by surprise; I thought about it in the shower. There is a very good reason we don't talk about BW in serial data links. As I explained, it's the rise time and settling time that are of utmost importance. You can have a very high-frequency link, but if it is underdamped you get ringing, and that is deadly in a communication link. These are digital pulses, not analog. High-speed interfaces like LVDS or ECL are characterized by rise time and settling time. Impedance matching and disturbances in the middle of the link affect the settling time. For digital data we only worry about how fast the data settles to the correct level for sampling, not about the frequency response. So by specifying the bit rate, it is implied that the data can settle within the bit interval to be sampled.

    Even when I was working for LeCroy, all we worried about was how fast the output settled to the required accuracy, not the frequency response. A DAC is kind of like a varying DC source, in the sense that it slews to a new level and settles to the required accuracy within a given specification. Analog bandwidth is secondary.

    Also, frequency response and slew rate are totally different things, as is very clear in any op-amp spec. You can have an op amp with good BW but a slow slew rate, so it only performs well at small signal amplitudes. As the signal amplitude goes up, you run into the slew-rate limit well before the frequency limit. In this kind of digital link it's the slew rate and rise time that dominate.

    Back to the eye pattern: it can tell you a lot at a glance. If the link is frequency limited (more likely slew-rate limited), the eye closes smoothly, like a sine wave getting smaller. But if you have reflections, you see kinks in the eye, and you can tell right away that you have an impedance disruption. Ringing will show up too.

    In conclusion, rise time and settling time are related to BW only in a limited sense; it's the rise time and settling time that matter, not BW.
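    A toy simulation of that smooth eye closure (nothing like real SONET hardware; just random NRZ data pushed through a single-pole low-pass and folded into two-symbol slices; the filter coefficient is arbitrary):

[code]
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
sps = 32                          # samples per symbol
bits = rng.integers(0, 2, 400)    # random NRZ data
x = np.repeat(2.0 * bits - 1.0, sps)

# Crude band-limited channel: a single-pole low-pass. A smaller
# coefficient means less bandwidth, slower edges, and a smoothly
# closing eye; a reflection would put kinks in the traces instead.
a = 0.15
y = np.empty_like(x)
y[0] = x[0]
for n in range(1, len(x)):
    y[n] = y[n - 1] + a * (x[n] - y[n - 1])

# Fold the waveform into two-symbol-wide slices: the eye pattern.
win = 2 * sps
for k in range(sps, len(y) - win, sps):
    plt.plot(y[k:k + win], color="tab:blue", alpha=0.1)
plt.title("Simulated NRZ eye pattern")
plt.show()
[/code]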
     
    Last edited: Aug 25, 2012
  17. Aug 26, 2012 #16
    I would say rise time and settling time are intimately related to BW, but you do need to be aware of slew rate.

    Were you working on interfacing LVDS ICs and optical transceivers at the system level? I ask because I used to work on front ends for 10Gb optical Ethernet chips, and let me tell you, we were OBSESSED with bandwidth. The design was all about squeezing every drop of bandwidth we could from the process, with techniques like Cherry-Hooper amplifiers in front and parallel ADCs following. I also have LVDS interfaces on most of the chips I work on, and bandwidth is key, especially in making sure they are stable.

    My thinking is you are still concerned with bandwidth, but kind of by proxy.
     
  18. Aug 26, 2012 #17

    sophiecentaur

    Science Advisor
    Gold Member

    Time and frequency domains are appropriate in different contexts. For the best channel performance you need to address both aspects. The effects of channel noise and interference depend largely on the receiver BW, and the general effect of transmissions on the surroundings is conveniently characterised by the transmitter bandwidth (which corresponds to the channel bandwidth).
    The two BWs together will affect the pulse response, which can be assessed with an eye pattern. But inter-symbol interference (which gives a dreadful eye pattern) can be dealt with by further baseband filtering (i.e. both RF and baseband characteristics count). It's all this, plus the effects of the environment, that gives you your usable data rate.
     
  19. Aug 26, 2012 #18

    rbj


    strictly speaking, it's not too simple. your bandwidth is 50 MHz and your signal-to-noise ratio is 2^8 to 1. but the ideal formula depicts the top theoretical limit to throughput.

    the theoretical limit to channel capacity (in bits per unit time) of some communications system is:

    [tex] C = B \ \log_2 \left( 1+\frac{S}{N} \right) [/tex]

    or if the signal-to-noise ratio is not constant with frequency

    [tex] C = \int_0^B \log_2 \left( 1+\frac{S(f)}{N(f)} \right) \ df [/tex]


    what this means for your DAC (followed by a matching ADC) is that the noise must be smaller than what would cause the number read by the ADC to be different from what was output by the DAC. if the noise exceeds that, you will have bit errors and your channel capacity is lower.
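    plugging in the numbers above (a sketch of the first formula only; the 2^8 = 256 signal-to-noise ratio is taken from the start of this post):

[code]
from math import log2

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    # C = B * log2(1 + S/N): the hard upper limit, whatever the modulation
    return bandwidth_hz * log2(1.0 + snr_linear)

print(f"{shannon_capacity_bps(50e6, 2**8) / 1e6:.0f} Mb/s")   # ~400 Mb/s
[/code]

    note that this comes out near 400 Mb/s, short of the raw 800 Mb/s figure in post #1; to pass 800 Mb/s through 50 MHz you would need S/N on the order of 2^16.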
     
  20. Aug 26, 2012 #19

    sophiecentaur

    Science Advisor
    Gold Member

    I have to pick you up on this statement (way back) that I just (re-)read. All pulses are analogue signals. They carry digital information. The shape of a pulse is a result of what the filtering has done to the initial digital values, which can be impulses or 'boxcar' (with near-integer but actually analogue values). The eye pattern is useful as it gives a view of how the symbols can interfere with each other, depending on the past and future values of the original samples. This is all analogue and, until your final decision circuit in the receiver, is a time varying, infinite-level signal. The decision circuit will produce a set of digital, quantised and re-timed values to which your receiver is 'committed' and which it processes digitally.
    The distinction between digital and analogue regimes is often very blurred in people's minds. One day, circuitry may operate without those distinctions (as happens in our brains), but these days there is a real difference in electronics thinking. Digital implies quantisation - binary or n-ary.
     
  21. Aug 26, 2012 #20

    sophiecentaur

    Science Advisor
    Gold Member

    Have we been chasing the wrong hare, here?

    Are you talking in terms of generating (synthesising) signals to fit into a 5MHz channel here, using the above DAC? What I have been writing is correct (afaik) but is not very relevant if that is the context.
    If your DAC produces a set of samples at 100MS/s then all you need to do is low-pass filter in order to produce your wanted signal. There will, of course, be quantisation noise (error / distortion), and there are formulae which will tell you the equivalent level of white noise - given the number of levels and the sample rate. But I can't quite see how this interpretation actually ties in with the OP.
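    (For reference, a minimal sketch of those formulae, assuming a full-scale sine wave and the quantisation noise treated as white out to half the sample rate:)

[code]
from math import log10

def sqnr_db(n_bits):
    """Full-scale-sine quantisation SNR: 6.02*N + 1.76 dB."""
    return 6.02 * n_bits + 1.76

def noise_density_dbfs_per_hz(n_bits, sample_rate_hz):
    """Quantisation noise spread evenly from 0 to fs/2."""
    return -sqnr_db(n_bits) - 10 * log10(sample_rate_hz / 2.0)

# The DAC from post #1: 8 bits at 100 MS/s.
print(f"SQNR: {sqnr_db(8):.1f} dB")                            # ~49.9 dB
print(f"Noise floor: {noise_density_dbfs_per_hz(8, 100e6):.1f} dBFS/Hz")
[/code]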
    Could you enlighten me, please? The thread has been interesting enough, but it needs closure, I think.
     