How does raising a variable to a power affect the error in measurement?

  • Thread starter: kwah
  • Tags: Error, Reciprocal
AI Thread Summary
Raising a variable to a power affects the error in measurement through error propagation, which is not straightforward. The discussion highlights the challenge of calculating the error in t^-2 when the error in t is known, specifically a constant error of ±0.13s. The user attempts to derive the error using percentage error but finds the results counterintuitive and seeks a more reliable method. They explore the relationship between the variable and its powers, suggesting that the error scales with the power applied, though they are unsure of the exact calculations. Ultimately, the conversation emphasizes the need for a clear understanding of error propagation techniques in experimental measurements.
kwah
Hi,

For an experiment I have a value for the error in time t (s) of ±0.13 s, but I'm having difficulty getting from this to an error in t^-2 for the error bars drawn on a graph.

My values for t range from 24.88 to 36.84 seconds.
The corresponding range for t^-2 is 0.0016155 down to 0.0007368 s^-2.


My initial thought was that the error would simply follow the algebraic manipulation of t (so 0.13^-2 ≈ 59.17), but that would give values such as 0.0008 ± 59.17 s^-2, which seems ridiculous.

Secondly, I thought I would work with the error % instead. The percentage error in t is approximately 0.42% (100 × 0.13/{mean of t values}), and
(0.42%)^2 = 0.00176%, while
(0.42%)^-2 ≈ 5670000%.

This definitely leads me to believe that the error of a reciprocal should not itself be a reciprocal, but it doesn't really get me much closer to the error values I need.
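To make the mismatch concrete, here is a quick check (a sketch of my own; it just evaluates t^-2 at t ± 0.13 s for the two ends of the range quoted above):

Code:
dt = 0.13                      # timer error in t (seconds)
for t in (24.88, 36.84):       # endpoints of the measured range above
    lo, hi = (t + dt)**-2, (t - dt)**-2   # t^-2 shrinks as t grows
    print(f"t = {t:5.2f} s: t^-2 lies in [{lo:.7f}, {hi:.7f}] s^-2,"
          f" half-width ~ {(hi - lo)/2:.2e} s^-2")

The half-widths come out at roughly 10^-5 s^-2 - nowhere near ±59 s^-2.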



A rough attempt at sticking some nice numbers in doesn't help too much:

100 ± 10
squaring: 90^2 to 110^2
= 8100 to 12100
= 10100 ± 2000

(I did the same again on paper for cubing it, and for a base of 10.)


From that I can somewhat see that the ^2 & ±2000 and ^3 & ±301000 might be related somehow, but none of it seems immediately intuitive or obvious.
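For what it's worth, expanding the square shows where both numbers come from (a standard binomial step, spelled out for clarity):

$$(100 \pm 10)^2 = 100^2 \pm 2 \cdot 100 \cdot 10 + 10^2 = 10000 \pm 2000 + 100,$$

so the half-width 2000 is twice the 10% relative error applied to 10000, and the extra ##+100## is what shifts the midpoint to 10100.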

Searches for 'error squared' (and variations thereof) mostly show results for mean square error and root mean square. A few results I found here point to error propagation, but I didn't find the brief look I took at error propagation accessible - maybe it's simply lack of sleep, but it just went in one eye and out the other.





So yeah, basically just a very long-winded way of asking:
How should errors be manipulated? Specifically, how is the error in a variable affected if the variable is raised to a power?

I don't mind reading up about it if you can point me to somewhere the answer definitely is, as I'm not normally one to ask for answers to be served to me - but on a short deadline I'd like something quick and accessible, please :)



Thanks,
kwah



PS: apologies if this is in the wrong section, but I think the issue I'm having is just a simple math / manipulation issue :) .
 
Let me see if I understand your first question:

If the error in measuring the variable ##t## is ##\pm h## about the actual value ##t_0##, what is the error in measuring ##\frac{1}{t^2}##?

You want some bound other than
$$\frac{1}{(t_0 \pm h)^2} - \frac{1}{t_0^2}\ ?$$

You want a bound independent of ##t_0##?
 
Stephen Tashi said:
Let me see if I understand your first question:

If the error in measuring the variable ##t## is ##\pm h## about the actual value ##t_0##, what is the error in measuring ##\frac{1}{t^2}##?

Edit: Yes, I think so - though I fear not a perfect yes.
To clarify: within the experiment I have a timer error in the variable ##t## of ##\pm 0.13## s, which I believe to be constant for all values of ##t##.

The value I need to plot on a graph is ##\frac{1}{t^2}##, but I do not know how large the error bars should be.

I have approximated the error % by using 0.13 s against the mean of all values of ##t##, and it appears to be a possible method, but I do not believe I actually need the error %... I just want the (absolute?) error that I should use on my graph.
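As a concrete sketch of what I mean by absolute error bars (my own illustration; the rule ##\Delta(t^{-2}) = 2\,\Delta t/t^{3}## is standard first-order error propagation, and the sample t values are assumed from the range quoted earlier):

Code:
dt = 0.13                                # constant timer error (s)
for t in (24.88, 28.0, 32.0, 36.84):     # assumed sample times (s)
    y = t**-2                            # value to plot (s^-2)
    dy = 2 * dt / t**3                   # first-order error bar: |d(t^-2)/dt| * dt
    print(f"t = {t:5.2f} s -> 1/t^2 = {y:.7f} +/- {dy:.1e} s^-2")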
Stephen Tashi said:
You want some bound other than
$$\frac{1}{(t_0 \pm h)^2} - \frac{1}{t_0^2}\ ?$$

You want a bound independent of ##t_0##?

Apologies, I'm not familiar with the terminology. That might be what I'm looking for, but I really do not know ;)

I guess what I'm trying to achieve is to start with a variable ##t \pm h## and find out what happens to the value of ##h## if you were to raise ##t## to an arbitrary power.

After pushing forward with putting the numbers in and trying to spot a pattern, it appears that I have found the following, which appears to be (nearly) true:

If a single measurement is ##t \pm h##, then for the power ##n## it appears that

$$t^n \pm h_n, \qquad h_n = \left( |n| \times \frac{h}{t} \times 100 \right) \%$$

i.e. the percentage error in ##t^n## is ##|n|## times the percentage error in ##t##.

Examples (please forgive the horrifically long lines..):

##200^1 \pm 5##
##200^1 \pm (|1| \times 2.5)\% = 200 \pm 2.5\%## = 195 to 205 = ##200 \pm 5##
##200^2 \pm (|2| \times 2.5)\% = 200^2 \pm 5\% = 40000 \pm 2000## = 38000 to 42000 ≈ ##195^2## to ##205^2## = 38025 to 42025 = ##40025 \pm 5\%##
##200^3 \pm (|3| \times 2.5)\% = 200^3 \pm 7.5\% = 8000000 \pm 600000## = 7400000 to 8600000 ≈ ##195^3## to ##205^3## = 7414875 to 8615125 = ##8015000 \pm 7.5\%##
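A quick script to check the pattern for several powers, including the negative ones mentioned below (my own verification, reusing the ##200 \pm 5## numbers):

Code:
t, h = 200.0, 5.0
for n in (1, 2, 3, -1, -2):
    lo, hi = sorted(((t - h)**n, (t + h)**n))  # exact interval endpoints
    half_rel = (hi - lo) / 2 / t**n            # actual relative half-width
    predicted = abs(n) * h / t                 # the pattern: |n| * h/t
    print(f"n = {n:2d}: actual {half_rel:.4%} vs predicted {predicted:.4%}")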

The same holds for the integer values of ##n<0## that I've plugged in, but I can't be bothered to type it all out ;)

As I said, it appears to be nearly true, but I do not understand why it is only an approximation, or what the "correct" method should be... Maybe it helps to explain what I'm trying to find, though?
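Why the pattern is only nearly true (a standard expansion, added for clarity): the binomial series gives

$$(t \pm h)^n = t^n\left(1 \pm n\,\frac{h}{t} + \frac{n(n-1)}{2}\,\frac{h^2}{t^2} \pm \cdots\right),$$

so to first order in ##h/t## the relative error of ##t^n## is ##|n|\,h/t## - exactly the pattern above - while the second-order term shifts the midpoint (e.g. to 40025 rather than 40000) and makes the rule approximate rather than exact.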
 