I've looked at this carefully, but I am still unsure and can't seem to find any information on whether these formulas work for errors in the mean of quantities. For example, I'm not sure that
$$\frac{d\langle T^2\rangle}{\langle T^2\rangle} = 2\,\frac{d\langle T\rangle}{\langle T\rangle}$$
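For reference, here is a sketch of where a relation like that comes from, assuming the usual first-order (Taylor) propagation of a small error through $y = x^2$:

$$y = x^2, \qquad \sigma_y \approx \left|\frac{dy}{dx}\right|\,\sigma_x = 2\,|x|\,\sigma_x \quad\Longrightarrow\quad \frac{\sigma_y}{|y|} = 2\,\frac{\sigma_x}{|x|}.$$

Taking $x = \langle T\rangle$ reproduces the relative-error form quoted below; note that, read this way, it propagates the uncertainty of $\langle T\rangle$ into $\langle T\rangle^2$ rather than into $\langle T^2\rangle$.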
Well, I'm using a molecular dynamics program to determine a set of T values over time, so I get some fluctuation about the average T, and I've worked out the error in the average of T. The problem is that I now need the error in <dEk>, which I think needs the error in T, but I'm not completely sure.
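As an illustration of the first step (the error in the average of T mentioned above), here is a minimal block-averaging sketch in Python; the synthetic temperature series, the variable names, and the choice of ten blocks are placeholders rather than anything from the original post:

```python
import numpy as np

def error_in_mean(T, n_blocks=10):
    """Standard error of <T> by block averaging: split the (correlated)
    series into blocks, average each block, and treat the block means
    as roughly independent samples of <T>."""
    T = np.asarray(T, dtype=float)
    block_size = len(T) // n_blocks
    block_means = T[:block_size * n_blocks].reshape(n_blocks, block_size).mean(axis=1)
    return block_means.std(ddof=1) / np.sqrt(n_blocks)

# Placeholder temperature trajectory standing in for the MD output
rng = np.random.default_rng(0)
T = 300.0 + 5.0 * rng.standard_normal(10_000)

T_mean = T.mean()
T_mean_err = error_in_mean(T)
print(f"<T>   = {T_mean:.3f} +/- {T_mean_err:.3f}")

# First-order propagation to <T>^2: sigma_(<T>^2) ~ 2 <T> sigma_<T>
print(f"<T>^2 = {T_mean**2:.1f} +/- {2 * T_mean * T_mean_err:.1f}")
```

Block averaging is used here because successive MD samples are correlated, so the naive standard error of the mean would come out too small.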
[edit] adding: can't you use something like $$\frac{\sigma_{T^2}}{T^2} = 2\,\frac{\sigma_T}{T}\,?$$
I know of this formula but I wasn't sure if it would work with the error being in the average of T...
Hi, does anyone know of an easy way to calculate the error in $\langle x^2\rangle$ from the error in $\langle x\rangle$? I am running a molecular dynamics simulation and trying to work out the error in the fluctuation of kinetic energy, $$\langle \delta E_k \rangle = \left\langle \tfrac{3}{2} N T^2 \right\rangle - \left\langle \tfrac{3}{2} N T \right\rangle^2,$$ from the error in $\langle T\rangle$.
Thanks in advance
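One way to put an error bar on the fluctuation itself, rather than propagating the error in $\langle T\rangle$, is to resample the trajectory. Below is a minimal block-bootstrap sketch in Python; the function names, the synthetic series, and the block/resample counts are my own placeholders, and it only illustrates the idea under the assumption that the T values are available as an array:

```python
import numpy as np

def fluctuation(T):
    """Sample estimate of <T^2> - <T>^2 for a temperature series."""
    T = np.asarray(T, dtype=float)
    return np.mean(T**2) - np.mean(T)**2

def block_bootstrap_error(T, n_blocks=20, n_resamples=1000, seed=0):
    """Error bar on the fluctuation from resampling contiguous blocks,
    which roughly preserves the time correlation within each block."""
    T = np.asarray(T, dtype=float)
    block_size = len(T) // n_blocks
    blocks = T[:block_size * n_blocks].reshape(n_blocks, block_size)
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(n_resamples):
        picks = rng.integers(0, n_blocks, size=n_blocks)
        estimates.append(fluctuation(blocks[picks].ravel()))
    return np.std(estimates, ddof=1)

# Placeholder series standing in for the MD temperature trajectory
rng = np.random.default_rng(1)
T = 300.0 + 5.0 * rng.standard_normal(50_000)

print(f"<T^2> - <T>^2 = {fluctuation(T):.3f} +/- {block_bootstrap_error(T):.3f}")
```

Resampling whole contiguous blocks (rather than individual samples) is a common way to keep at least the short-time correlations of the trajectory intact.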
If a question asks for the direction of the maximum gradient of a scalar field, is it acceptable to just use del(x) as the answer or is the question asking for a unit vector?
Thanks
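For what it's worth, the two candidate answers in that question are related in a standard way; writing the scalar field as $\phi$ (my notation, not from the post):

$$\hat{u}_{\max} = \frac{\nabla\phi}{\lVert\nabla\phi\rVert},$$

i.e. $\nabla\phi$ already points in the direction of maximum increase, and normalising it just gives the corresponding unit vector.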