How to Create a Significant Figures Calculator in Python?

In summary, this simple Python class allows for recording and working with the mean and standard deviation for numerical calculations, and keeps track of them for rounding purposes. The goal of the project is to estimate a reasonable covariance based on knowledge of the correlation of signs of point errors around their variable's mean.
  • #1
andrewr
Hi,
I am toying around with writing a significant figures calculator in Python.
According to NIST and other sites, the preferred way of working with uncertainty is to report the uncertain digits in concise form: e.g. 11234(13) kg would be 11,234 kg +/- 13 kg, with 13 being 1 standard deviation.

So I have built a really simple Decimal-based class which can record and work with the mean and the standard deviation, and keep track of them for rounding purposes in simple numerical calculations.
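To give the flavour of it, here is a minimal sketch (not my actual class -- the name and methods are just placeholders) that stores a mean and a 1-sigma deviation, adds in quadrature, and prints the concise NIST-style form for integer-scale values:

Code:
class UVal(object):
    """Toy value-with-uncertainty: a mean plus a 1-sigma deviation."""
    def __init__(self, mean, std):
        self.mean = float(mean)
        self.std  = float(std)

    def __add__(self, other):
        # Naive quadrature addition -- assumes no correlation at all.
        return UVal(self.mean + other.mean,
                    (self.std**2 + other.std**2)**0.5)

    def concise(self):
        """Concise NIST-style form, e.g. 11234(13); integer-scale values only."""
        return "%.0f(%.0f)" % (self.mean, self.std)

m = UVal(11234, 13)                    # 11,234 +/- 13 kg
print(m.concise())                     # -> 11234(13)
print((m + UVal(500, 7)).concise())    # -> 11734(15), since (13**2+7**2)**0.5 ~= 14.8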

P.S. This isn't for personal coursework -- it's just experimentation to see how useful the idea would be. E.g., I don't need it to follow textbook rounding rules or anything...

For example, given two random variables a and b, the expression (a+b)-a obviously has a correlation problem, since 'a' appears twice. Adding the uncertainties in quadrature (the Pythagorean theorem) twice produces a resulting standard deviation which is much too large.
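A quick numerical illustration of the problem (just a sketch with the standard library): the true spread of (a+b)-a is simply the spread of b, but blind quadrature applied twice reports something much larger.

Code:
import random

n = 10000
a = [random.gauss(0, 3) for _ in range(n)]
b = [random.gauss(0, 4) for _ in range(n)]

def std(x):
    mu = sum(x) / len(x)
    return (sum((v - mu)**2 for v in x) / (len(x) - 1))**0.5

sa, sb = std(a), std(b)

# Blind quadrature, ignoring that 'a' appears twice:
naive = ((sa**2 + sb**2) + sa**2)**0.5    # ~ sqrt(9 + 16 + 9) ~ 5.8

# Actual spread of the computed data (a+b)-a, which is just b again:
actual = std([(a[i] + b[i]) - a[i] for i in range(n)])

print("naive quadrature: %.2f   actual: %.2f" % (naive, actual))
# naive comes out near 5.8, while the actual spread is ~4 (the std of b).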

I am after a very crude way of compensating for this type of effect, where each variable keeps some kind of history of what the individual sample deviations were. So, for example, given that 'a' represents the composite of 60 experiments (and so does b), I might record the "sign" of the sample deviation for each of the 60 experiments -- but keep only the numerical value of the mean and standard deviation for variables 'a' and 'b' (e.g. means a, b and stds sa, sb).

Given that I know the sign, I figure I can use it to test crudely for correlation (not quite Pearson, but cheaper computationally).

If I then added 'a' to itself, the calculator would not necessarily know that the two operands are the 'same' variable, but it would know that their sign correlation was identical. That seems like it might be useful in reducing the calculated deviation errors; but I get stuck when trying to figure out what the resulting standard deviation would be when adding variables together for which I KNOW how many of the sample deviations share the same sign.

Has this approach been done before, and how is it attacked?
I have found this much information in the literature available on the web:

Error propagation for addition operation of random variables (I can figure out all others from this one example...)

Given two random variables:
[tex]a=\mu_{a}\pm\sigma_{a}[/tex]
[tex]b=\mu_{b}\pm\sigma_{b}[/tex]

The variance of a is simply
[tex]\sigma_{a}^{2}[/tex] and the variance of b is [tex]\sigma_{b}^{2}[/tex]

When no covariance or correlation is present:

[tex]a+b=\mu_{a}+\mu_{b}\pm(\sigma_{a}^{2}+\sigma_{b}^{2})^{0.5}[/tex]

Covariance is non-normalized correlation; it is computed from the original data points' "errors" as:

[tex]cov(a,b)=\sum_{i=1}^{n}\frac{\left(a_{i}-\mu_{a}\right)\left(b_{i}-\mu_{b}\right)}{n}[/tex]

Normalized covariance is called correlation:

[tex]cor(a,b)=\frac{cov(a,b)}{\sqrt{\sigma_{a}\sigma_{b}}}[/tex]

When covariance is present, the sum of variables becomes:

[tex]a+b=\mu_{a}+\mu_{b}\pm(\sigma_{a}^{2}+\sigma_{b}^{2}+2cov(a,b))^{0.5}[/tex]
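A quick numerical sanity check of that sum formula (a sketch; the construction of a and b below is arbitrary, just something with a non-zero covariance):

Code:
import random

n = 100000
x = [random.gauss(0, 1) for _ in range(n)]
a = [2.0 * v + random.gauss(0, 0.5) for v in x]   # both built from x,
b = [1.0 * v + random.gauss(0, 0.7) for v in x]   # so a and b are correlated

mean = lambda s: sum(s) / len(s)
ma, mb = mean(a), mean(b)

var_a  = mean([(v - ma)**2 for v in a])
var_b  = mean([(v - mb)**2 for v in b])
cov_ab = mean([(a[i] - ma) * (b[i] - mb) for i in range(n)])

lhs = mean([(a[i] + b[i] - ma - mb)**2 for i in range(n)])   # Var(a+b), measured directly
rhs = var_a + var_b + 2.0 * cov_ab

print("Var(a+b) measured: %.4f   from formula: %.4f" % (lhs, rhs))
# The two agree to rounding error -- the identity is exact when the same
# 1/n moments of the same data set are used on both sides.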

When the covariance is 0 (as is possible for independent variables), the formula reduces to the one for independent variables. The goal of my project, then, is to estimate a reasonable covariance based on knowledge of the *correlation* of the signs of the point errors around their variable's mean,
[tex]corSgn(a,b)=\sum_{i=1}^{n}\frac{sgn\left(a_{i}-\mu_{a}\right)sgn\left(b_{i}-\mu_{b}\right)}{n}[/tex]
and assuming a Gaussian distribution of errors.

Clearly the sign correlation will optimistically estimate the variables' correlation, so that simply saying
[tex]cov(a,b)\approx corSgn(a,b)*\sqrt{\sigma_{a}*\sigma_{b}}[/tex]
wouldn't be an unreasonable guess; but I am unsure how far off this would be from the actual covariance of a data-set computed based on a Gaussian distribution about a mean.
How would I estimate how much the sign-based correlation over-estimates the actual correlation of the data?
 
  • #2
I'm beginning to think there was probably a better place to post this question -- the statistics section, which I didn't notice earlier. Who do I contact about possibly getting it moved?

I have done more research on the web -- counting sign correlation is sometimes used to measure the skewness of a data sample; a Czech author, Cyhelský, pioneered the method -- but searching for that author turns up nothing on the web regarding actual uses of the method.
Does anyone know of references/titles to this author's work?

In any event...

I ran some numerical experiments on two 10000-element random data vectors (Gaussian) to see what happens. I first added the two vectors together to check the result for uncorrelated data, and then sorted the vectors not according to magnitude but only according to sign; that preserves the Gaussian nature of the distribution but introduces sign correlation.
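The heart of that experiment is just this (a stripped-down sketch; the full test code is in a later post):

Code:
import random

n = 10000
a = [random.gauss(0, 1) for _ in range(n)]
b = [random.gauss(0, 1) for _ in range(n)]

def std(x):
    mu = sum(x) / len(x)
    return (sum((v - mu)**2 for v in x) / (len(x) - 1))**0.5

# Uncorrelated case: stds add in quadrature, so expect ~sqrt(2) = 1.414
print("uncorrelated sum: %.3f" % std([a[i] + b[i] for i in range(n)]))

# Sort each vector by sign only (negatives first, then positives).  The
# magnitudes keep their Gaussian distribution, but the signs now line up.
a.sort(key=lambda v: v >= 0)
b.sort(key=lambda v: v >= 0)

print("sign-sorted sum:  %.3f" % std([a[i] + b[i] for i in range(n)]))
# This lands near 1.81, matching the runs below.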

In theory, two uncorrelated random vectors with std ≈ 1 ought to add as √(σa² + σb²), so the result is √2.
Doing three runs, I get:
#1
Std a,b= 0.999984524822 1.00001480622
signCor()= 0.0002 stdc= 1.41571605043
signCor()= 0.9882 stdc= 1.80636872562

#2
Std a,b= 0.999940516957 1.00000922566
signCor()= 0.0054 stdc= 1.42050509084
signCor()= 0.9994 stdc= 1.80941029406

#3
Std a,b= 0.999957285915 1.0000793552
signCor()= 0.0088 stdc= 1.42004205657
signCor()= 0.9996 stdc= 1.80996360335

This is in reasonable agreement with theory: √2 ≈ 1.414.

The sign-correlated addition arrives at ~1.810, so that's the answer for 1 standard deviation, but it isn't obvious what relationship/equation would give such a value.
If I double (or otherwise scale) the standard deviation of both vectors/data sets, the standard deviation of the result scales linearly as well, which means sign-correlated addition is as well behaved as the uncorrelated formula.

If anyone has an analytical formula, I would appreciate it...

Here is another test point, for anyone who wants to check their own analytical formula against another known answer... Thanks!

Standard deviations set to 1 and 1.25 produce:
Std a,b= 0.999941614027 1.24999175125
signCor()= -0.0062 stdc= 1.59627866073
signCor()= 0.9994 stdc= 2.03581511276
 
  • #3
I am getting good results now regarding how to use the correlation.
In my original formulas I made a mistake -- the denominator of the correlation is not the square root of the product of the standard deviations, but the square root of the product of the variances, i.e. just the product of the standard deviations.

I'm surprised nobody noticed...

[tex]cor(a,b)=\frac{cov(a,b)}{\sigma_{a}\sigma_{b}}[/tex]
[tex]cov(a,b)=cor(a,b)\,\sigma_{a}\sigma_{b}[/tex]
I am using the sample deviation to estimate the actual standard deviation of a sample, and after many trials I came up empirically with a function based on this idea:

Code:
def AddStd( a, b, sa, sb, sgnCor ):
    """
    Compute an additive standard deviation based on means a,b and
    std deviations sa, sb, and sign correlation sgnCor in [-1, 1];
    0 = no correlation between signs, -1.0 = 100% anti-correlation,
    and 1.0 = 100% correlation.
    """
    co = 0.6375*sgnCor*sa*sb              # covariance estimated from the sign correlation
    su = (sa**2 + sb**2 + 2*co )**0.5     # quadrature sum including the covariance term

    return su

That Python function is generally correct for addition. The covariance of normally distributed randomized data works out to an average value of ~0.6375; i.e., that's the measured covariance of data whose signs are correlated but whose magnitudes are normally distributed at random.
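Here is a standalone way to see where a number like that comes from (a sketch, not the harness I actually measured it with): take two independent unit normals, force their signs to agree, and measure the covariance. For what it's worth, the result is suspiciously close to 2/π ≈ 0.6366, which is what E[|a||b|] works out to for independent standard normals, though I haven't proven that is the exact limit of my construction.

Code:
import random

n = 1000000
cov = 0.0
for _ in range(n):
    x = random.gauss(0, 1)
    y = abs(random.gauss(0, 1))
    if x < 0:
        y = -y               # force the signs to agree, magnitudes stay independent
    cov += x * y             # both means are 0, so this accumulates the covariance

print("measured covariance: %.4f" % (cov / n))
# prints ~0.637, i.e. close to 2/pi ~= 0.6366; presumably the ~0.6375 above is
# the same quantity seen through the sign-sorting test harness.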

I attempted to extend the results to multiplication, but found it more troublesome. I assumed the standard deviation of a product could be found from the original data in the following manner, given two variables' means a and b and treating their std deviations σa, σb as random variables:

[tex](a+\sigma_{a})(b+\sigma_{b})=ab+a\sigma_{b}+b\sigma_{a}+\sigma_{a}\sigma_{b}[/tex]

The result, then, is just ab with a deviation found from:
[tex]a\sigma_{b}+b\sigma_{a}+\sigma_{a}\sigma_{b}[/tex]

Since a and b are exact numbers (means), they merely scale the std deviations -- and those pieces combine using the Python addition function above; however, the final term (σa*σb) can't be deduced from the addition formula I already discovered.
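For the totally uncorrelated, zero-mean case at least, the "product of the stds" behaviour of that last term is easy to confirm numerically (a sketch):

Code:
import random

n = 200000
sa, sb = 3.0, 4.0
a = [random.gauss(0, sa) for _ in range(n)]
b = [random.gauss(0, sb) for _ in range(n)]

def std(x):
    mu = sum(x) / len(x)
    return (sum((v - mu)**2 for v in x) / (len(x) - 1))**0.5

prod = [a[i] * b[i] for i in range(n)]
print("std of product: %.3f   sa*sb: %.3f" % (std(prod), sa * sb))
# For independent zero-mean variables Var(ab) = Var(a)*Var(b), so the std of
# the element-wise product comes out near sa*sb = 12.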

By inspection, I have found the empirical estimate σa*σb ≈ exp( -0.25 * sgnCor**2 )*σa*σb
to work very well for a=0, b=0, and std deviations set to any value. But if I try to use the full formula I came up with above, the result does not come out numerically correct on test data.

I don't know why.

I did some research into error propagation to determine the correct formulas for multiplying numbers with uncertainty, but discovered that the easily available references do not deal with covariance of the data -- they only supply formulas for totally uncorrelated data.
I can come up with a formula that is typically accurate to 1 std deviation when using numerical test data; but I would still appreciate any pointers someone might have to an analytical solution for this type of problem.

--Thanks!
 
  • #4


Here's my Python code for doing numerical tests on Cyhelsky-skewed data sets and for attempting to find the correct formula for error propagation given a normal distribution of data values and a correlation of signs (like Cyhelsky skew) between two data vectors.

Note: Python is a free language that comes pre-installed on most systems, though it often isn't installed on MS Windows...

From any machine, simply get to the command prompt and then type: python [enter].
If the prompt changes to ">>>" you have Python installed.
Under Windows you may need to "run" a program called "cmd" or "command.com" in order to get a command prompt from which to run Python.

The following Python program is saved as "test.py" on my machine; it may simply be cut and pasted into a convenient text editor and saved as an ASCII text file. Then -- from a running Python shell -- I just type things like:
>>>import test
>>>reload(test)
>>>test.run(6) # Do a six sample run and compile the results...

to use the program. (It's mostly comments, so scroll through it -- it's pretty easy to understand.)

Code:
"""
test module for doing numerical analysis of Cyhelsky skew / correlated error
propagation in addition and multiplication problems.
Written By: Andrew F. Robinson; 2011 -- dedicated to the public domain...
Requires standard Python 2.x (math.gamma needs 2.7) and the standard libraries only.
"""
import random
import math
import operator as op

def mean(x): return reduce( op.add, x)/float(len(x))
def std(x,ddof=1):
	"""
	Compute a sample deviation / Bessel corrected by default.
	"""
	mu=mean(x)
	vs=reduce( op.add, map( lambda a: (a-mu)**2.0, x ) )
	vs/=float( len(x) - ddof )
	return vs**0.5

def Bias( n ):
	"""
	Return a bias factor for the std deviation of a sub-sample.
	From:
	http://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation
	"""
	if (n<2): raise ValueError,"Bias must be for more than "+str(n)+"sample"
	if n<200: return (2./(n-1.))**0.5*math.gamma(n/2.)/math.gamma((n-1)/2.)
	return 1.0-1.0/(4*n)-7./(32*n**2)-19./(128*n**3)	

def Vector( n, mu, sigm ):
	"""
	Generate an idealized vector of random numbers, n long, with
	a mean value of "mu" and a standard deviation (of theoretical pop.) sigm

	This adjusts the deviation to reflect the ideal bias for the sample size
	and also the mean is adjusted to be (within roundoff error) exact.
	"""
	l=[]
	for i in xrange(n):
		l.append(random.gauss(mu=mu , sigma=sigm ))

	# Compute actual sample deviation of generated vector
	s=std(l)
	# Compute factor to normalize the sample deviation to the ideal bias...
	adjStd=(sigm/s) * Bias( n )
	l=map( lambda x: x*adjStd, l )

	# Adjust the mean to the exact value, but preserve the std. 
	adjMean=( mu - mean(l) )# Calculate a delta err
	l=map( lambda x: x+adjMean, l )

	return l

def SgnCor( l , m ):
	"""
	Compute the sign correlation between Vectors with
	respect to their individual means.
	Zero deviation values from the respective means are randomly assigned
	based on index even/odd to minimize their statistical impact.
	(Virtually never happens, though...)
	"""
	c=0
	ml=mean(l)
	mm=mean(m)
	z=False
	for i in xrange(len(l)):
		if ( l[i]==ml or m[i]==mm ): 
			z=not z
			if not z: c+=1 # Count only half of the zero items...
			print "+"
		elif ((l[i] > ml) == (m[i] > mm)):
			c+= 1 # count if same sign (not zeros)
	r = len( l )
	return (2.*c)/r - 1.

def CyhelskySkew( l ):
	"""
	Compute the Cyhelsky skew coefficient for Vector "l"

	Number of items above the mean vs. number below the mean,
	expressed as a decimal fraction in [-1, 1].
	Zero deviation values from the mean are randomly assigned
	based on index even/odd to minimize their statistical impact.
	(Virtually never happens, though...)
	"""
	c=0
	z=False
	ml=mean(l)
	for i in xrange(len(l)):
		if ( l[i] == ml ):
			z=not z
			if not z: c+=1 # Count only half of the zero items...
			print "+"
		elif l[i] > ml: c+= 1 # Count all positive signed items, no negative
	r = len(l)
	return (2.*c)/r - 1. 

def AddStd( a, b, sa, sb, sgnCor ):
	"""
	Compute an additive standard deviation based on means a,b and
	std deviations sa, sb, and sign correlation sgnCor
	"""
	co = 0.63735*sgnCor*sa*sb
	su = (sa**2 + sb**2 + 2.*co )**0.5 # unbiased addition values

	return su

def MulStd( a, b, sa, sb, sgnCor ):
	"""
	Compute a multiplicative standard deviation based on means a,b and
	std's sa, sb, and sign correlation sgnCor.

	Totally uncorrelated produces the simple product of stds
        Correlation does affect the product of stds...

	Examples:
	10000 runs of 1000 len Vector:
	cor: avg  (std)
	 1: 0.7745 (.0338)
	-1: 0.7730 (.0342)
	 0: 0.9978 (.0317)

	"""
	l=1000 # assume vector of 1000 values used in the test.
	bias=Bias(l)
	sa/=bias 
	sb/=bias

	suc = l * abs(sgnCor)  # Nr perfectly uncorrelated samples (Wrong?)

	sampUC = sa*sb * Bias( suc ) # uncorrelated STD for sample
	sampC  = sa*sb * .777 * Bias( l-suc ) # Correlated STD

	print sgnCor,sa*sb*(1-abs(sgnCor)), sa*sb*sgnCor
	return sampUC * (1-abs(sgnCor)) + sampC * sgnCor

def AddStdVec( l, m ):
	""" Trivial element-by-element addition of two lists (vectors) """
	return map( lambda x,y : x+y, l, m )

def MulStdVec( l, m ):
	""" Trivial element-by-element multiplication of two lists (vectors) """
	return map( lambda x,y : x*y, l, m )

############################################################################
ma,sa,mb,sb= 0. , 100. , 0. , 100. 
corr=0.99 # Roughly how much of the vector is to be correlated in second pass.

OpVec=MulStdVec  # Operation to test is multiplication
OpStd=MulStd     # This is the approximation formula

# Generate two random vectors, essentially uncorrelated
a=Vector( 1000, ma, sa )
sa=std( a )
b=Vector( 1000, mb, sb )
sb=std( b )

# Perform the operation on the uncorrelated vectors, print result
c=OpVec( a , b )
sc0=std( c )
cab0=SgnCor( a , b )
estS0=OpStd( ma, mb, sa, sb, cab0 )
print sc0,estS0

# Sort the vectors in order to produce correlated or anti-correlated results
bin=int( len(a)*abs(corr) + 0.99 ) 
a[0:bin]=sorted( a[0:bin], key=lambda e: (e-ma)>=0 )
b[0:bin]=sorted( b[0:bin], key=lambda e: ((e-mb)>0) == (corr>0) )

c=OpVec( a , b )
sc1=std( c )
cab1=SgnCor( a , b )
estS1=OpStd( ma, mb, sa, sb, cab1 )
print sc1,estS1

co = 0 # unused

############################################################################
avg0,avg1,var0,var1=0,0,0,0
import test
def run(n):
	"""
	Perform the basic test n times, and average the results for correlated
	and uncorrelated versions of the data-set.
	"""
	global avg0, avg1, var0, var1, cab0, cab1, est0, est1
	s0,s1,cor0,cor1,est0, est1, cov =[],[],[],[],[],[],[]
	for i in xrange(n):
		reload( test )
		s0.append( sc0 )
		s1.append( sc1 )
		cor0.append( cab0 )
		cor1.append( cab1 )
		est0.append( estS0 )
		est1.append( estS1 )
		cov.append( co )
		#print ("%.1f"%sc0),("%.1f"%estS0),("%+.2f"%cab0)
		#print ("%.1f"%sc1),("%.1f"%estS1),("%+.2f"%cab1)

	std0=std( s0 )/(n**0.5)
	avg0=mean( s0 )
	cor0=mean( cor0 )
	est0=mean( est0 )
	print ("cor0~=%+.3f"%cor0), "avgStd(c)0=",avg0, est0, " std(c)0~=",std0, (" er0=%.1f"%((est0-avg0)/std0))
	var0 = avg0**2

	std1=std( s1 )/(n**0.5)
	avg1=mean( s1 )
	cor1=mean( cor1 )
	est1=mean( est1 )
	print ("cor1~=%+.3f"%cor1), "avgStd(c)1=",avg1, est1, " std(c)1~=",std1, (" er1=%.1f"%((est1-avg1)/std1))
	var1 = avg1**2
	#print "cov~=",mean(cov)

The multiplication algorithm, MulStd(), attempts to calculate the standard deviation of the multiplication operation on two random Vector()s with a normal (Gaussian) distribution of values. This is where I am having trouble figuring out an analytical formula...

When I multiply out 1000-element vectors, 10000 times, where both vectors have mean=0, std=1.0 (sample deviation, adjusted for bias), and I adjust the number of sign- (not magnitude-) correlated data points in each vector to a fixed percentage (see the program for details), I get:

SgnCor: mu (avgStd (stdStd))
+100%: 0 (0.7745 (0.0338))
-100%: 0 (0.7730 (0.0342))
<0.1%: 0 (0.9978 (0.0317))

So it looks like 0(1) * 0(1) ≈ 0(0.9978), with a deviation in the deviation of roughly 1/(1000)**0.5 in my tests. Clearly, correlating the data DOES reduce the standard deviation of the result; but I am not sure how to calculate the intermediate values where the correlation is between 0 and ±100%.
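For the fully sign-correlated end point there is also a standalone check, independent of the test harness above (a sketch):

Code:
import random

n = 200000
prod = []
for _ in range(n):
    x = random.gauss(0, 1)
    y = abs(random.gauss(0, 1))
    if x < 0:
        y = -y               # force the signs to match; magnitudes stay independent
    prod.append(x * y)       # == abs(x)*abs(y), always positive

mu = sum(prod) / n
s = (sum((v - mu)**2 for v in prod) / (n - 1))**0.5
print("std of sign-matched product: %.3f" % s)
# Lands near 0.77 -- the same ballpark as the 0.7745 / 0.7730 measured above
# with the sorted 1000-element vectors.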

I was thinking of taking the correlation value and computing a weighted std from the two cases: the 100% correlated value times the number of correlated points, plus the 0% correlated value times the number of uncorrelated points;
but I appear to have either made a mistake, or it just isn't as accurate as the approximation noted in my previous post.

Any thoughts why? (Anyone just want to chime in, yeah -- I don't know either?)
It's lonely...
 
  • #5
It's still lonely...

The Wikipedia article appears to contain an error in its equation for the unbiased estimation of the standard deviation of a Gaussian sub-sample. In addition, the code I published in the last post uses the function math.gamma(), which is not available in Python releases earlier than 2.7. What follows fixes both problems, using a correct formula for the unbiased estimation found at NIST.gov (USA), and with simpler math functions so that more commonly distributed versions of Python (as of this writing) can run the sample code. My apologies for any inconvenience the extra cut and paste may require of you... :eek:

Hopefully I get this Sig fig calculator done soon!

Code:
def Bias( n ):
    """
    Return a bias factor (c4) for the std deviation of a sub-sample.
    http://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation
    note: the Wikipedia (Jan 2011) formula for c4 is inaccurate; see instead:
    http://www.iti.nist.gov/div898/handbook/pmc/section3/pmc32.htm
    """
    n=int(n+0.5)
    if (n<2): raise ValueError,"Bias must be for more than "+str(n)+"sample"
    fTop   = n/2.0 -1.
    fBot   =(n-1.0)/2.0 -1.
    res = ( 2.0/(n-1.0) )**0.5
    for i in xrange( int(n/2)-1 ):
        res *= (fTop - i)/(fBot - i)
    return res * ( 0.5*math.pi**0.5 if n&1 else 1.0/math.pi**0.5 )
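Where math.gamma is available (Python 2.7+), the loop version above can be sanity-checked against the direct Γ-ratio form of c4. A quick sketch, assuming the Bias() above has already been pasted into the same session:

Code:
import math

def Bias_gamma( n ):
    """c4 computed directly; needs math.gamma (Python 2.7 / 3.x)."""
    return (2./(n-1.))**0.5 * math.gamma(n/2.) / math.gamma((n-1)/2.)

for n in (2, 3, 5, 10, 30, 100):
    print("n=%3d  loop=%.10f  gamma=%.10f" % (n, Bias(n), Bias_gamma(n)))
# The two columns agree to within floating-point rounding.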
 

What is a sig fig calculator in Python?

A sig fig calculator in Python is a program that can calculate the appropriate number of significant figures in a given number. It takes into account the precision and accuracy of the input values and outputs the result with the correct number of significant figures.

How does a sig fig calculator in Python work?

A sig fig calculator in Python works by first taking in the input value and determining its precision and accuracy. It then uses mathematical functions and rounding rules to calculate the appropriate number of significant figures in the result, typically through conditional statements and loops.
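For instance, the core rounding step (rounding a value to a given number of significant figures) might look like this minimal sketch:

Code:
import math

def round_sig(x, sig):
    """Round x to 'sig' significant figures (sketch; ignores ties-to-even subtleties)."""
    if x == 0:
        return 0.0
    digits = sig - int(math.floor(math.log10(abs(x)))) - 1
    return round(x, digits)

print(round_sig(0.00123456, 3))   # -> 0.00123
print(round_sig(123456.0, 3))     # -> 123000.0
print(round_sig(9.876e5, 2))      # -> 990000.0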

Why is it important to use a sig fig calculator in Python?

Using a sig fig calculator in Python ensures that results are reported with an appropriate number of significant figures. It helps avoid errors and provides a more reliable result. This is especially important in scientific and mathematical calculations, where the number of significant figures is crucial for the validity of the data.

Can a sig fig calculator in Python handle scientific notation?

Yes, a sig fig calculator in Python can handle scientific notation. It can determine the number of significant figures in both standard and scientific notation, which makes it a useful tool for scientists and mathematicians who frequently work with very large or very small numbers.

Are there any limitations to using a sig fig calculator in Python?

While a sig fig calculator in Python is a helpful tool, it is not infallible. It can only provide results as good as the precision and accuracy of the input values, and the number of significant figures in a result depends on the mathematical operation used. It is therefore essential to understand the rules and limitations of significant figures when using such a calculator.
