Why is RMS used for averages when considering a sine wave?

AI Thread Summary
The discussion centers on the use of RMS (Root Mean Square) values for averaging sine waves, highlighting that RMS provides a more useful measure in scientific contexts, particularly in alternating current applications. The average of the absolute value of a sine wave is about 0.637, while the RMS value is about 0.707, leading to questions about which is more appropriate. RMS is favored because it aligns closely with the true mean of normally distributed data and simplifies analytical work due to its differentiable nature. Additionally, RMS values allow for straightforward calculations involving orthogonal functions and relate directly to average power in electrical systems. Ultimately, the choice of average depends on the specific data and intended application.
iScience
Consider a sine wave.

To find the average of this function over an interval of ##2\pi##, why not just do...

$$\frac{1}{2\pi}\int_0^{2\pi} |\sin x|\,dx$$

This turns out to be

$$\frac{4}{2\pi} = \frac{2}{\pi} \approx 0.6366$$


The whole purpose of squaring and then taking the square root at the end is to make sure there are no negative values. This would be fine if its value came out to be the same as the average of the absolute value (which is what I thought we were trying to find in the first place), but the value is different.
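A quick numerical check makes the difference concrete. Here is a minimal sketch in Python (NumPy assumed; the sample count is arbitrary):

```python
import numpy as np

# Sample one full period of sin(x)
x = np.linspace(0.0, 2.0 * np.pi, 1_000_000)
s = np.sin(x)

# Average of the absolute value: 2/pi ~ 0.6366
print(np.mean(np.abs(s)))

# Root mean square: 1/sqrt(2) ~ 0.7071
print(np.sqrt(np.mean(s**2)))
```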

So... I guess I have two questions:

1.) Is 0.636 more correct to use as an average than 0.707?

2.) Why do we always use the RMS value in science as opposed to the ACTUAL average (of the absolute value)?
 
That is a perfectly valid "average", with slightly different properties than RMS. Which one you choose depends on the data and what you plan to do with it. One can show, for example, that if your data come from a normal distribution, the RMS will, on average, lie closer to the true mean of the distribution than the absolute-value mean does. Also, since the absolute value function is not differentiable, it can be harder to work with analytically than RMS.
 
iScience said:
The whole purpose of squaring and then taking the square root at the end is to make sure there are no negative values. This would be fine if its value came out to be the same as the average of the absolute value (which is what I thought we were trying to find in the first place), but the value is different.
One of the key uses of the RMS of a sinusoid is in alternating current. Suppose you have a light bulb and you want it to shine just as brightly on direct current as it does on a 120 volt RMS AC supply. What DC voltage do you need? The answer is 120 volts: the RMS voltage (or current) gives the equivalent DC voltage (or current).
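To illustrate that equivalence numerically, here is a minimal sketch (Python with NumPy; the 100 ohm load and 60 Hz frequency are arbitrary assumptions):

```python
import numpy as np

R = 100.0                      # assumed resistive load (ohms)
V_rms = 120.0                  # AC supply, RMS volts
V_peak = V_rms * np.sqrt(2.0)  # ~169.7 V peak

# One cycle of the 60 Hz line voltage
t = np.linspace(0.0, 1.0 / 60.0, 100_000)
v = V_peak * np.sin(2.0 * np.pi * 60.0 * t)

p_ac = np.mean(v**2 / R)  # average power delivered by the AC supply
p_dc = V_rms**2 / R       # power delivered by a 120 V DC supply

print(p_ac, p_dc)         # both ~144 W: the bulb shines equally brightly
```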


So... I guess I have two questions:

1.) Is 0.636 more correct to use as an average than 0.707?
No.

2.) Why do we always use the RMS value in science as opposed to the ACTUAL average (of the absolute value)?
What makes you think your average is the "ACTUAL" one?

There are many ways of computing a "norm" or average. Yours is but one; RMS is another. Yet another is the maximum absolute deviation. There are others as well. Which one is "right"? That's the wrong question. They all are, in their own way.
 
One reason that RMS is a natural measurement to use is that you can think of it as the extension to infinite dimensions of standard Euclidean distance.

If we have some point, say, (3,4,5) in 3-dimensional Euclidean space, then the norm (the distance from this point to the origin) is ##\sqrt{3^2 + 4^2 + 5^2}##. We can think of a function as a "point" in infinite-dimensional space, and its "distance" from the origin (the zero function) is ##\sqrt{\int |f(x)|^2\,dx}##.

But even in Euclidean space, there are many other norms we can use, for example ##(3^p + 4^p + 5^p)^{1/p}##, where ##p## is any real number ##\geq 1##. The special case ##p=2## gives Euclidean distance. Similarly, ##\left(\int |f(x)|^p\,dx\right)^{1/p}## is a perfectly valid norm to use for functions.
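For concreteness, here is a small sketch (Python with NumPy; the chosen values of ##p## are arbitrary) evaluating that family of norms for the point (3,4,5):

```python
import numpy as np

v = np.array([3.0, 4.0, 5.0])

for p in (1.0, 2.0, 4.0):
    norm = np.sum(np.abs(v)**p) ** (1.0 / p)
    print(p, norm)
# p=1: 12.0   (sum of absolute values)
# p=2: ~7.07  (ordinary Euclidean distance)
# p=4: ~5.57  (norms shrink toward the max component as p grows)
```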

A couple of advantages of the ##p=2## case (RMS):

1. The Pythagorean theorem: if ##f## and ##g## are orthogonal (meaning ##\int f(x)g(x)\,dx = 0## in the case of functions), then ##\int |f(x) + g(x)|^2\,dx = \int |f(x)|^2\,dx + \int |g(x)|^2\,dx##. This makes it easy to calculate the RMS of the sum of certain kinds of functions, such as sinusoids or noise (see the numerical check after this list).

2. The Cauchy-Schwarz inequality: ##|\int f(x) g(x) dx| \leq \sqrt{\int |f(x)|^2 dx}\sqrt{\int |g(x)|^2 dx}##
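Both properties are easy to verify numerically. A minimal sketch (Python with NumPy; the particular pair ##\sin x## and ##\cos 3x## is just one arbitrary orthogonal choice):

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1_000_000)
f = np.sin(x)
g = np.cos(3.0 * x)  # orthogonal to sin(x) over a full period

def ms(h):
    """Mean square, proportional to the integrals above."""
    return np.mean(h**2)

# Pythagorean theorem for orthogonal functions
print(ms(f + g), ms(f) + ms(g))  # both ~1.0

# Cauchy-Schwarz inequality
lhs = abs(np.mean(f * g))
rhs = np.sqrt(ms(f)) * np.sqrt(ms(g))
print(lhs, "<=", rhs)  # ~0 <= 0.5
```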
 
Related to DH's post:

RMS voltage multiplied by (in-phase) RMS current gives average power. Some references call this product "RMS power", but this is not correct; it is average power, hence the equivalent brightness of the light bulb.
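To make the distinction concrete, here is a minimal sketch (Python with NumPy; the 120 V RMS, 10 A RMS in-phase pair is an arbitrary assumption):

```python
import numpy as np

t = np.linspace(0.0, 1.0 / 60.0, 100_000)   # one 60 Hz cycle
w = 2.0 * np.pi * 60.0
v = 120.0 * np.sqrt(2.0) * np.sin(w * t)    # 120 V RMS
i = 10.0 * np.sqrt(2.0) * np.sin(w * t)     # 10 A RMS, in phase

p = v * i                                   # instantaneous power

print(np.mean(p))              # average power: ~1200 W = 120 V * 10 A
print(np.sqrt(np.mean(p**2)))  # RMS of the power waveform: ~1470 W, not the average power
```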
 