Question about RMS value of a sine wave

AI Thread Summary
The effective value of a sine wave is 0.707 times its amplitude because of its relationship to heating in resistive loads: it produces the same power as a DC voltage equal to 0.707 times the amplitude. This value comes from the root mean square (RMS) calculation, which squares the sine wave before averaging, reflecting its energy content. In contrast, the value of 0.637 is relevant for average measurements, such as in electromagnets, where it corresponds to the average field strength produced by a rectified sine wave. The RMS value is preferred for its mathematical properties, particularly in energy calculations and its preservation under the Fourier transform. Understanding these distinctions is essential for applications in electrical engineering and physics.
qwas
Sorry if this sounds like a dumb question, but why is the effective value of a sine wave 0.707, as opposed to 0.637, which is the value you get by taking the definite integral over the domain [0, π] and dividing by the length of the domain?
 
RMS means "root mean SQUARE": we square the sine wave, average it over the domain, and then take the square root. Regarding why we use this measurement, it is essentially a generalization of Euclidean distance: ##\sqrt{\int |x(t)|^2 dt}## is a limiting form of ##\sqrt{|x(t_1)|^2 + |x(t_2)|^2 + \ldots + |x(t_n)|^2}##, which is the distance between the point ##(x(t_1), x(t_2), \ldots, x(t_n))## and the origin. There are many other reasons to prefer the RMS as well: it plays nicely with how we measure the energy in a random quantity (variable or process), namely the standard deviation. Also, the RMS of a function/signal is preserved when we transform to the frequency domain via the Fourier transform.
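To make the 0.707 figure concrete, here is the calculation for a unit-amplitude sine over the same half-cycle you used:
$$\sqrt{\frac{1}{\pi}\int_0^{\pi}\sin^2 t \, dt} = \sqrt{\frac{1}{2}} \approx 0.707.$$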

Mathematically, "RMS" is also a common way to measure the norm ("size") of a function: we call it the ##L^2## norm. Working in the ##L^2## space is very nice because it is a Hilbert space, unlike the other ##L^p## spaces, and because the Fourier transform is an isometry on the ##L^2## space. Don't worry if these terms are unfamiliar - you may see them eventually if you study advanced mathematics or physics, but otherwise you can probably live a perfectly happy life if you never hear about them again. :smile:
 
By the way, the calculation you performed is also a common way of measuring the size of a function/signal. In mathematics we call it the ##L^1## norm: ##\int |x(t)| dt##. It is a limiting form of ##|x(t_1)| + |x(t_2)| + \ldots + |x(t_n)|##, which is another way of measuring the distance between a point and the origin, assuming you are constrained to travel along an orthogonal "grid" to get to the point.
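Carrying out your calculation (the average of the rectified sine over a half-cycle) for a unit-amplitude sine gives exactly the figure you quoted:
$$\frac{1}{\pi}\int_0^{\pi}\sin t \, dt = \frac{2}{\pi} \approx 0.637.$$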
 
Alright, thanks for the help!
 
qwas said:
Sorry if this sounds like a dumb question, but why is the effective value of a sine wave 0.707, as opposed to 0.637, which is the value you get by taking the definite integral over the domain [0, π] and dividing by the length of the domain?
The 0.707 figure is relevant where we are concerned with heating, i.e., the heat produced by that waveform. A sine wave of amplitude ##A_v## produces the same heat in a resistance as DC of amplitude ##0.707A_v##, since the instantaneous power is ##i^2(t)R##.
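Spelling that out for a sinusoidal current ##i(t) = A_v \sin \omega t## flowing through a resistance ##R## over one period ##T##:
$$P_{\text{avg}} = \frac{1}{T}\int_0^T \left(A_v \sin \omega t\right)^2 R \, dt = \frac{A_v^2 R}{2} = \left(0.707\,A_v\right)^2 R,$$
which is exactly the power a DC current of ##0.707A_v## would dissipate.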

The 0.637 figure also has its uses, but in situations where we are concerned with the average. For example, an electromagnet is roughly linear, so if you applied a rectified sine wave of amplitude ##A_v## to its windings, the field strength produced would have an average value equal to that produced by applying DC of magnitude ##0.637A_v##.
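If you want to check both figures numerically, here is a minimal sketch (my own, using NumPy, not something from this thread) that reproduces them for a unit-amplitude sine:

```python
import numpy as np

# Sample one full period of a unit-amplitude sine wave.
t = np.linspace(0.0, 2.0 * np.pi, 100_000)
x = np.sin(t)

# RMS: square, take the mean, then the square root -> 1/sqrt(2) ~ 0.7071
rms = np.sqrt(np.mean(x ** 2))

# Mean of the rectified (absolute-value) sine wave -> 2/pi ~ 0.6366
avg_rectified = np.mean(np.abs(x))

print(f"RMS of sin(t)    = {rms:.4f}")
print(f"Mean of |sin(t)| = {avg_rectified:.4f}")
```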
 