Error on the Mean: How to Compute w/ Individual Error

  • Thread starter: kelly0303
  • Tags: error, mean
In summary, the error on the mean of a set of measurements is often quoted as ##\sigma/\sqrt N##, where ##\sigma## is the sample standard deviation and ##N## is the number of measurements; this reflects the scatter of the data but ignores the uncertainties quoted on the individual points. Alternatively, the variance of the sample mean can be obtained by the standard propagation of errors: for independent measurements it equals the sum of the individual variances divided by ##N^2##, which reduces to ##\sigma^2/N## when all points have the same variance. Which approach is appropriate depends on what the individual uncertainties represent and on whether the goal is a single underlying value or the mean of a fluctuating signal.
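To make the two calculations concrete, here is a minimal numpy sketch of both quantities for the hypothetical ##10 \pm 3## and ##12 \pm 2## example discussed in the thread (illustrative only; which of the two is appropriate is the subject of the discussion below):

```python
import numpy as np

# Made-up measurements x_i with individual uncertainties dx_i
# (the 10 +/- 3 and 12 +/- 2 example used later in the thread).
x = np.array([10.0, 12.0])
dx = np.array([3.0, 2.0])
N = len(x)

mean = x.mean()

# (a) Error on the mean from the scatter of the data alone, sigma/sqrt(N);
#     this ignores the individual dx_i.
sem_scatter = x.std(ddof=1) / np.sqrt(N)

# (b) Error on the mean from propagating the individual errors,
#     sqrt(sum dx_i^2)/N; this ignores the scatter of the data.
sem_propagated = np.sqrt(np.sum(dx**2)) / N

print(mean, sem_scatter, sem_propagated)   # 11.0, 1.0, ~1.80
```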
  • #1
kelly0303
Hello! I have some measurements with errors associated with them: ##x_i \pm \delta x_i## and I want to cite the value of the mean with its error. I see online that the error on the mean is defined as ##\sigma/\sqrt N##, where ##\sigma## is the standard deviation of my measurements and ##N## is the number of measurements. However, this formula seems to ignore the error on the individual data points. How should I compute the error in this case? Thank you!
 
  • #2
In your case, the ##\sigma/\sqrt N## equation is not very useful; a better method may be to find the band of y-values that contains, for example, 95% of the error bars. The mean value would then be the center of the band, chosen so that the band is as narrow as possible.
 
  • #3
Or you could use the standard propagation of errors formula.
 
  • #4
Dale said:
Or you could use the standard propagation of errors formula.
I was thinking of doing that, but it doesn't take into account the standard deviation of the data itself, just the errors on the individual points. Would that be correct?
 
  • #5
Dale said:
Or you could use the standard propagation of errors formula.
Just to make sure I am clear about my question, say I have 2 measurements for a physical quantity of ##10 \pm 3## and ##12 \pm 2##. The mean would be ##11##, what error should I put on this mean value?
 
  • #6
kelly0303 said:
what error should I put on this mean value?
Whatever the propagation of errors formula says.
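For the ##10 \pm 3## and ##12 \pm 2## example, assuming independent errors, that formula gives
$$\sigma_{\bar x} = \frac{\sqrt{\delta x_1^2 + \delta x_2^2}}{2} = \frac{\sqrt{3^2 + 2^2}}{2} = \frac{\sqrt{13}}{2} \approx 1.8.$$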
 
  • #7
Dale said:
Whatever the propagation of errors formula says.
But I am still confused. If my results are ##10 \pm 3## and ##12 \pm 2## or ##1000 \pm 3## and ##1200 \pm 2## (say from 2 different experiments), the propagation of errors formula would give the same value in both cases. But obviously the two situations are widely different. The propagation of errors gives a measure of the precision of the measurement, but it completely ignores the accuracy (which would be the standard deviation of the 2 measurements?). My question is how I can reflect both of these in my final error on the mean.
 
  • #8
kelly0303 said:
But obviously the two situations are widely different.
The variance of the mean is not widely different.
 
  • #9
With no other information, you should consider each measurement as a random result from a random variable. Apply the statistical equations and theorems to the measured values.
 
  • #10
FactChecker said:
With no other information, you should consider each measurement as a random result from a random variable. Apply the statistical equations and theorems to the measured values.
So what would be the error in my initial example?
 
  • #11
Dale said:
The variance of the mean is not widely different.
I am not sure what you mean. In the first case the sigma is 100, so the error on the mean, given by ##\sigma/\sqrt N##, is ##100/\sqrt 2 = 70.71##; in the other case we have ##1/\sqrt 2 = 0.71##. The results are quite different. Am I misunderstanding something here?
 
  • #12
I would calculate the sample mean and sample standard deviation of the measurements using the usual formulas.
 
  • #13
FactChecker said:
I would calculate the sample mean and sample standard deviation of the measurements using the usual formulas.
But wouldn't that completely ignore the error on the individual data points?
 
  • #14
kelly0303 said:
But wouldn't that completely ignore the error on the individual data points?

The variance of ##\sum_i x_i## is the sum of the variances of the individual ##x_i##, under the assumption the ##x_i## are independent. Your individual errors are presumably something like ##90\%## confidence limits, and so proportional to the standard deviation of each ##x_i##. Thus you can calculate the individual variances and they very much depend on the error of the individual data points.

The variance of the sample mean ##\bar x = (1/N) \sum_i x_i## is equal to ##(1/N)^2## times the variance of ##\sum_i x_i##.

Note that when the data points all have the same variance ##\sigma^2## this reduces to the standard result ##\sigma_{\bar x}^2 = (1/N^2) \cdot N\sigma^2 = \sigma^2 / N##, or ##\sigma_{\bar x} = \sigma/\sqrt N##.
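A minimal sketch of this calculation, assuming the quoted ##\pm## values can be treated as standard deviations of independent measurements:

```python
import numpy as np

def mean_uncertainty(sigmas):
    """Standard deviation of the sample mean, assuming independent
    measurements: Var(xbar) = (1/N^2) * sum(sigma_i^2)."""
    sigmas = np.asarray(sigmas, dtype=float)
    return np.sqrt(np.sum(sigmas**2)) / len(sigmas)

print(mean_uncertainty([3.0, 2.0]))   # ~1.80
print(mean_uncertainty([1.0, 1.0]))   # equal sigmas: 1/sqrt(2) ~ 0.71
```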
 
  • #15
RPinPA said:
The variance of ##\sum_i x_i## is the sum of the variances of the individual ##x_i##, under the assumption the ##x_i## are independent. Your individual errors are presumably something like ##90\%## confidence limits, and so proportional to the standard deviation of each ##x_i##. Thus you can calculate the individual variances and they very much depend on the error of the individual data points.

The variance of the sample mean ##\bar x = (1/N) \sum_i x_i## is equal to ##(1/N)^2## times the variance of ##\sum_i x_i##.

Note that when the data points all have the same variance ##\sigma^2## this reduces to the standard result ##\sigma_{\bar x}^2 = (1/N^2) \cdot N\sigma^2 = \sigma^2 / N##, or ##\sigma_{\bar x} = \sigma/\sqrt N##.
So assuming we have (for simplicity) ##10 \pm 1## and ##12 \pm 1##, the final result would be ##11 \pm 1/\sqrt 2##. But if we have ##1000 \pm 1## and ##1200 \pm 1##, the result would be ##1100 \pm 1/\sqrt 2##. So the error would be the same in both cases? Shouldn't I account somehow for the fact that in the second case my measurements are so far apart? I was thinking of doing something like this: first get the standard deviation of the samples, ##\sigma_1## (which doesn't care about the individual errors on the data points), then get the error obtained by error propagation when calculating the mean, ##\sigma_2## (which doesn't care about the distribution of the data points), and then add them in quadrature. This way I would take both effects into account. I am just not sure if this is right or how I should adjust it to my problem.
 
  • #16
kelly0303 said:
So assuming we have (for simplicity) ##10 \pm 1## and ##12 \pm 1##, the final result would be ##11 \pm 1/\sqrt 2##. But if we have ##1000 \pm 1## and ##1200 \pm 1##, the result would be ##1100 \pm 1/\sqrt 2##. So the error would be the same in both cases? Shouldn't I account somehow for the fact that in the second case my measurements are so far apart? I was thinking of doing something like this: first get the standard deviation of the samples, ##\sigma_1## (which doesn't care about the individual errors on the data points), then get the error obtained by error propagation when calculating the mean, ##\sigma_2## (which doesn't care about the distribution of the data points), and then add them in quadrature. This way I would take both effects into account. I am just not sure if this is right or how I should adjust it to my problem.

OK, I sort of see what you're saying, but the sample variance is the wrong statistic to answer the question you're asking implicitly.

When you take a sample mean, you are assuming that all your measurements are of the same quantity. You are assuming there is some unchanging "real" value, and that the measurements are disturbed from it by some process which is random and symmetric unless there's some reason to think otherwise (a normal distribution being the usual assumption). The sample variance estimates the experimental error that is causing the measured values to differ from the true value, above or below.

So if you had a measurement ##1000 \pm 1##, meaning that you are ##90\%## confident that the actual value is between ##999## and ##1001##, and then you take another measurement and you are ##90\%## confident that the actual value is between ##1199## and ##1201##, then you have a problem. The question you are asking implicitly is "are those measuring the same thing?" and that gets you into hypothesis testing, which would give you the result that those are not the same measurement. And then you have to ask if ##\bar x## means anything.

My response was an answer to the narrow question of "what is the variance of ##\bar x## in terms of the variances of the individual sample points?" If you know the variance of the individual data points, my analysis is valid. The sample variance is what you use as an estimator of the true variance when you don't know it, and with measurements like that you'd have to ask whether the ##\pm 1## was really a good estimate of the individual variances.
 
  • #17
kelly0303 said:
I am not sure what you mean. In the first case the sigma is 100, so the error on the mean, given by ##\sigma/\sqrt N##, is ##100/\sqrt 2 = 70.71##; in the other case we have ##1/\sqrt 2 = 0.71##. The results are quite different. Am I misunderstanding something here?
I have no idea where you are getting 100 from. If your results are ##10 \pm 3## and ##12 \pm 2## or ##1000 \pm 3## and ##1200 \pm 2## then you can do a simple Monte Carlo simulation (assume normally distributed) to see what the distribution of your mean would be. For 10000 draws of (X+Y)/2 where X~N(10,3) and Y~N(12,2) I got a mean of 11.0 and a standard deviation of 1.79. For 10000 draws of (X+Y)/2 where X~N(1000,3) and Y~N(1200,2) I got a mean of 1100 and a standard deviation of 1.81. So the variance is the same in both cases as close as you can expect with Monte Carlo.

Please run a Monte Carlo simulation yourself. The propagation of errors formula correctly identified that the variance is the same in both cases. You are incorrect in thinking that the variance is widely different in the two cases.
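For reference, a minimal numpy version of that Monte Carlo (the exact numbers vary slightly from run to run):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Case 1: X ~ N(10, 3), Y ~ N(12, 2)
m1 = (rng.normal(10, 3, n) + rng.normal(12, 2, n)) / 2
# Case 2: X ~ N(1000, 3), Y ~ N(1200, 2)
m2 = (rng.normal(1000, 3, n) + rng.normal(1200, 2, n)) / 2

print(m1.mean(), m1.std())   # ~11.0, ~1.8
print(m2.mean(), m2.std())   # ~1100.0, ~1.8
```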
 
  • #18
kelly0303 said:
Hello! I have some measurements with errors associated with them: ##x_i \pm \delta x_i## and I want to cite the value of the mean with its error.

You aren't specific about the meaning of "error". (What is meant by "the mean" is also ambiguous.)

On one hand, a measuring instrument can have a calibration guarantee such as ##\pm##2%.

On the other hand, a particular person can have a weight that differs from the mean weight of the population of people in his country. The difference between the person's weight and the mean weight of the population isn't necessarily an "error" on the person's part. His weight might be the optimal weight for his health. Calling this difference a "deviation" is clearer terminology.

And yet, on a third hand, a number can be computed from sample data and published as an estimate of the mean weight of the population. The difference between the estimate and the actual mean weight can be considered an "error".

kelly0303 said:
I want to cite the value of the mean with its error.

I know that some people use terminology that is ambiguous or completely screwed-up and yet manage to function well. However, for many people, increasing the precision of terminology helps increase understanding. In talking about statistics, it is a struggle to avoid ambiguity because terms like "the mean", "the standard deviation", etc. are themselves ambiguous.

The two major branches of statistics are 1) Hypothesis testing and 2) Estimation. I think you want to publish an estimate of the population mean (for whatever population you are considering). You also want to publish an estimate of the standard deviation of something. What that thing is, isn't simple to sort out! Statistics is subjective and different fields of study have different traditions. You should ask people in your field of study about traditional ways of computing the numbers you want to publish.

If we ignore tradition, we have to face the fact that problems studied in statistics are conceptually sophisticated and complicated - even when their computations are simple arithmetic. A number such as 1036.8 does not have a standard deviation. It doesn't vary. It is only when we model a random process that generates the number that we can associate a standard deviation with the number. What is the model for the random process that generates the estimate that you wish to publish?

Picking a probability model for a situation is subjective, but unless you have a specific model in mind, there is no objectively correct way to associate a standard deviation with a single numerical value generated by that process. Rather than mastering the art of creating probability models, you may find it simpler to investigate traditions!
 
  • #19
Dale said:
I have no idea where you are getting 100 from. If your results are ##10 \pm 3## and ##12 \pm 2## or ##1000 \pm 3## and ##1200 \pm 2## then you can do a simple Monte Carlo simulation (assume normally distributed) to see what the distribution of your mean would be. For 10000 draws of (X+Y)/2 where X~N(10,3) and Y~N(12,2) I got a mean of 11.0 and a standard deviation of 1.79. For 10000 draws of (X+Y)/2 where X~N(1000,3) and Y~N(1200,2) I got a mean of 1100 and a standard deviation of 1.81. So the variance is the same in both cases as close as you can expect with Monte Carlo.

Please run a Monte Carlo simulation yourself. The propagation of errors formula correctly identified that the variance is the same in both cases. You are incorrect in thinking that the variance is widely different in the two cases.
Why would I assume X~N(1000,3)? What I mean is that I make 2 measurements. For the first one I get a value of 1000 with an empirical error (say from equipment systematics) of 3. For the second one I get 1200 with an empirical error of 2. Both numbers come from the same distribution (which I don't know), not from 2 different distributions. The empirical value of the mean is 1100 and the empirical value of the sigma (ignoring the error on each individual measurement) is $$\sqrt{(1200-1100)^2+(1100-1000)^2}=100\sqrt 2$$ In the other case the empirical value of the sigma is ##\sqrt 2##. They are 2 orders of magnitude apart, which is quite a lot. Is my logic flawed? So what I am trying to figure out is how to take into account, in the end, both this sigma given by how far apart my 2 measurements are, and also how big the error on each measurement is.
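For reference, a small sketch of the two numbers being contrasted in this post, under the same independence assumptions as above:

```python
import numpy as np

def scatter_vs_propagated(x, dx):
    """Sample standard deviation of the values vs. the propagated
    error on their mean (independent errors assumed)."""
    x, dx = np.asarray(x, float), np.asarray(dx, float)
    scatter = x.std(ddof=1)                       # spread of the measurements
    propagated = np.sqrt(np.sum(dx**2)) / len(x)  # instrument-error contribution
    return scatter, propagated

print(scatter_vs_propagated([10, 12], [3, 2]))       # (~1.4, ~1.8)
print(scatter_vs_propagated([1000, 1200], [3, 2]))   # (~141.4, ~1.8)
```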
 
  • #20
Stephen Tashi said:
You aren't specific about the meaning of "error". (What is meant by "the mean" is also ambiguous.)

On one hand, a measuring instrument can have a calibration guarantee such as ##\pm##2%.

On the other hand, a particular person can have a weight that differs from the mean weight of the population of people in his country. The difference between the person's weight and the mean weight of the population isn't necessarily an "error" on the person's part. His weight might be the optimal weight for his health. Calling this difference a "deviation" is clearer terminology.

And yet, on a third hand, a number can be computed from sample data and published as an estimate of the mean weight of the population. The difference between the estimate and the actual mean weight can be considered an "error".
I know that some people use terminology that is ambiguous or completely screwed-up and yet manage to function well. However, for many people, increasing the precision of terminology helps increase understanding. In talking about statistics, it is a struggle to avoid ambiguity because terms like "the mean", "the standard deviation", etc. are themselves ambiguous.

The two major branches of statistics are 1) Hypothesis testing and 2) Estimation. I think you want to publish an estimate of the population mean (for whatever population you are considering). You also want to publish an estimate of the standard deviation of something. What that thing is, isn't simple to sort out! Statistics is subjective and different fields of study have different traditions. You should ask people in your field of study about traditional ways of computing the numbers you want to publish.

If we ignore tradition, we have to face the fact that problems studied in statistics are conceptually sophisticated and complicated - even when their computations are simple arithmetic. A number such as 1036.8 does not have a standard deviation. It doesn't vary. It is only when we model a random process that generates the number that we can associate a standard deviation with the number. What is the model for the random process that generates the estimate that you wish to publish?

Picking a probability model for a situation is subjective, but unless you have a specific model in mind, there is no objectively correct way to associate a standard deviation with a single numerical value generated by that process. Rather than mastering the art of creating probability models, you may find it simpler to investigate traditions!
Thank you for your reply and I am sorry if I was not clear. What I mean by error is any uncertainty associated with one single measurement. As a concrete example, say I measure the length of an object with a ruler that can't go lower than 1 (in some arbitrary units). So for any measurement my error is 1. Now let's say that I do 2 measurements getting ##100 \pm 1## and ##110 \pm 1##. The error that I mentioned above is this "1" obtained from each measurement (the error associated with an individual data point). What I am trying to figure out is, assuming that I say that the length of the object is the mean, i.e. 105, what error should I associate with this value? Sorry again if I was not clear before.
 
  • #21
kelly0303 said:
Why would I assume X~N(1000,3)? What I mean is that I make 2 measurements. For the first one I get a value of 1000 with an empirical error (say from equipment systematics) of 3. For the second one I get 1200 with an empirical error of 2. Both numbers come from the same distribution (which I don't know), not from 2 different distributions.
It is completely meaningless to say that they come from the same distribution and then give different variances for the two measurements. If they come from the same distribution then they must have the same variance.
 
  • #22
Dale said:
It is completely meaningless to say that they come from the same distribution and then give different variances for the two measurements.
Why would that be? Continuing the example I gave above, say that I measure the length with a ruler that can't go below 1 and I get ##100 \pm 1##, then I measure the same object with a ruler that can't go below 5 and I get ##110 \pm 5##. The measurements of the length will form a Gaussian distribution, but the error I get on each individual measurement has to do with my measuring instrument, not with the original distribution.
 
  • #23
kelly0303 said:
Why would that be? Continuing the example I gave above, say that I measure the length with a ruler that can't go below 1 and I get ##100 \pm 1##, then I measure the same object with a ruler that can't go below 5 and I get ##110 \pm 5##. The measurements of the length will form a Gaussian distribution, but the error I get has to do with my measuring instrument, not with the original distribution.
The measurements with those two rulers would clearly not have the same distribution.
 
  • #24
Dale said:
The measurements with those two rulers would clearly not have the same distribution.
OK... but still, how do I combine them into one result?
 
  • #25
kelly0303 said:
OK... but still, how do I combine them into one result?

You have to be specific about the interpretation of the number you wish to publish. Your example of the "uncertainty" in measuring with a ruler doesn't involve probability. If the ruler can be read to ##\pm## a tenth of an inch, then a measurement with the ruler is guaranteed to be within that distance of the actual value. There is no probability associated with that guarantee - except that it is 100% probable that it's true.

"Uncertainty" is an ambiguous term. How is it interpreted in your field of study? Is it supposed the be the standard deviation of a random variable? Or is it supposed to give an absolute guarantee about something?

To repeat, there is no objective answer to your question unless the question is converted to an unambiguous form. To make that conversion, you must appreciate the conceptual sophistication of statistics.
 
  • #26
kelly0303 said:
OK... but still, how do I combine them into one result?
How you combine them depends on the application, but taking the mean is a reasonable approach. If you have one measurement which is X~N(100,1) and another measurement that is Y~N(110,5), then the mean will be distributed as N(105, 2.55). This is exactly what the propagation of errors formula gives.

Perhaps you mean that your measurements would be X~U(99.5,100.5) and Y~U(107.5,112.5) and you are objecting to using the normal distribution for these? I am not certain whether the propagation of errors formula depends on the specific distribution.
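For what it's worth, the variance-addition step behind the propagation formula only requires independence, not a particular distribution shape; a quick numerical check with the uniform ranges above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Uniform "resolution" errors: half-widths 0.5 and 2.5.
x = rng.uniform(99.5, 100.5, n)
y = rng.uniform(107.5, 112.5, n)
m = (x + y) / 2

# Propagation of errors prediction, using Var(U(a, b)) = (b - a)**2 / 12.
predicted = np.sqrt((1.0**2) / 12 + (5.0**2) / 12) / 2

print(m.std(), predicted)   # both ~0.74
```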
 
  • #27
@kelly0303 I think that I may be misunderstanding your situation. Perhaps it would help to look at the NIST guide for evaluating uncertainty. They do a really good job of distilling the complicated topic of measurement uncertainty into something approachable.

https://www.nist.gov/sites/default/files/documents/2017/05/09/tn1297s.pdf

I think that especially section 4.6 may be relevant here. Also note section 5.1 advocates the use of the propagation of errors formula for combining uncertainties.
 
  • #28
Stephen Tashi said:
You have to be specific about the interpretation of the number you wish to publish. Your example of the "uncertainty" in measuring with a ruler doesn't involve probability. If the ruler can be read to ##\pm## a tenth of an inch, then a measurement with the ruler is guaranteed to be within that distance of the actual value. There is no probability associated with that guarantee - except that it is 100% probable that it's true.

"Uncertainty" is an ambiguous term. How is it interpreted in your field of study? Is it supposed the be the standard deviation of a random variable? Or is it supposed to give an absolute guarantee about something?

To repeat, there is no objective answer to your question unless the question is converted to an unambiguous form. To make that conversion, you must appreciate the conceptual sophistication of statistics.
Hmmm, OK, I will try to be more specific (I am sorry, I don't know much about statistics, so I hope this will help). Say we have a source that produces a signal (in arbitrary units) of mean 1000 and standard deviation 100. And I have a measuring device with a resolution of 50 and another one with a resolution of 200. I do one measurement with each of them and I get: ##900 \pm 50## and ##1100 \pm 200##. How should I properly combine these 2 measurements? Please let me know if I need to give more details.
 
  • #29
kelly0303 said:
I do one measurement with each of them
OK, since you are doing one measurement with each of them, that is definitely a "type B" uncertainty. I think that section 4.6 is what you want.
 
  • #30
Ok, so having reviewed the NIST document and your post I think that I understand the “official” procedure.

kelly0303 said:
Say we have a source that produces a signal (in arbitrary units) of mean 1000 and standard deviation 100.
Ok, so this is a type A uncertainty with a standard uncertainty of ##u_s = 100##. If you are trying to measure the mean of the signal then this uncertainty contributes to the uncertainty of the measurement. But if you are trying to measure an individual value of this signal then this uncertainty is not relevant since it is part of the measurand.

kelly0303 said:
And I have a measuring device with a resolution of 50
This is then a type B uncertainty with a standard uncertainty of ##u_1 = 50/(2\sqrt{3}) \approx 14##.

kelly0303 said:
and another one with resolution of 200
Which is a type B standard uncertainty of ##u_2 = 200/(2\sqrt{3}) \approx 58##.

kelly0303 said:
I do one measurement with each of them and I get: 900±50 and 1100±200. How should I properly combine these 2 measurements?
So if your goal is to measure the individual signal value, you would use the propagation of errors. For that the combined uncertainty is ##u_c = \sqrt{u_1^2+u_2^2}/2 \approx 30##.

But if your goal was to measure the mean of the signal then I am not certain, but I think that the combined uncertainty would be ##u_c = \sqrt{u_s^2 + u_1^2/4 + u_2^2/4} \approx 104##.

I am not confident about that last one.
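A small numerical sketch of the quantities above (the last combination carries the same caveat as in the post):

```python
import numpy as np

# Type B standard uncertainties from instrument resolution
# (NIST TN 1297 style: half-width a = resolution/2, u = a / sqrt(3)).
u1 = (50 / 2) / np.sqrt(3)    # ~14
u2 = (200 / 2) / np.sqrt(3)   # ~58

# Goal: the individual signal value -> propagate the instrument errors only.
u_c_value = np.sqrt(u1**2 + u2**2) / 2                 # ~30

# Goal: the mean of the fluctuating signal -> the signal's own spread
# (u_s = 100) also enters, as tentatively suggested above.
u_s = 100
u_c_mean = np.sqrt(u_s**2 + u1**2 / 4 + u2**2 / 4)     # ~104

print(u1, u2, u_c_value, u_c_mean)
```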
 
  • #31
Dale said:
Ok, so having reviewed the NIST document and your post I think that I understand the “official” procedure.

Ok, so this is a type A uncertainty with a standard uncertainty of ##u_s = 100##. If you are trying to measure the mean of the signal then this uncertainty contributes to the uncertainty of the measurement. But if you are trying to measure an individual value of this signal then this uncertainty is not relevant since it is part of the measurand.

This is then a type B uncertainty with a standard uncertainty of ##u_1 = 50/(2\sqrt{3}) \approx 14##.

Which is a type B standard uncertainty of ##u_2 = 200/(2\sqrt{3}) \approx 58##.
So if your goal is to measure the individual signal value, you would use the propagation of errors. For that the combined uncertainty is ##u_c = \sqrt{u_1^2+u_2^2}/2 \approx 30##.

But if your goal was to measure the mean of the signal then I am not certain, but I think that the combined uncertainty would be ##u_c = \sqrt{u_s^2 + u_1^2/4 + u_2^2/4} \approx 104##.

I am not confident about that last one.
Thank you for this! I actually found this: https://ned.ipac.caltech.edu/level5/Leo/Stats4_5.html I think this is what I was looking for.
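For reference, the prescription in references like that one is the error-weighted (inverse-variance) mean; a minimal sketch, assuming the quoted errors can be treated as standard deviations:

```python
import numpy as np

def weighted_mean(x, dx):
    """Error-weighted (inverse-variance) mean and its uncertainty,
    with weights w_i = 1 / dx_i**2."""
    x, dx = np.asarray(x, float), np.asarray(dx, float)
    w = 1.0 / dx**2
    mean = np.sum(w * x) / np.sum(w)
    err = 1.0 / np.sqrt(np.sum(w))
    return mean, err

print(weighted_mean([10, 12], [3, 2]))   # (~11.4, ~1.7)
```

Note that the weighted mean is pulled toward the more precise point, and its uncertainty is smaller than either individual error.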
 
  • #33
Dale said:
That seems good. It still uses the propagation of errors, but in a way that reduces the overall variance.
I still think something is missing (and I realize now my title was misleading, I am sorry for that). This gives you the error on the mean, i.e. how confident you are about the mean value. However, if I am trying to approximate the real distribution with my measurements (which I assume is what experimentalists are trying to do in general), do I need to add the standard deviation of the samples themselves on top of the error?
 
  • #34
kelly0303 said:
I still think something is missing
So my recommendation when faced with situations that you are having trouble figuring out is: Monte Carlo. At a minimum it lets you give any theoretical calculations a bit of a reality check.

kelly0303 said:
if I am trying to approximate the real distribution with my measurements
There are two approaches that I know of. One is to assume some class of parametric distributions and then use the data to estimate the parameters. The other is to simply use the empirical distribution. The empirical distribution is nonparametric but has known error bounds, so I like it.
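A minimal sketch of the empirical distribution (ECDF) for a set of measurements, using made-up data:

```python
import numpy as np

def ecdf(data):
    """Empirical CDF: sorted values and the fraction of points
    less than or equal to each value."""
    x = np.sort(np.asarray(data, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

xs, ys = ecdf([10.2, 11.7, 9.8, 12.1, 10.9])
print(list(zip(xs, ys)))
```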
 
  • #35
kelly0303 said:
Hmmm, OK, I will try to be more specific (I am sorry, I don't know much about statistics, so I hope this will help). Say we have a source that produces a signal (in arbitrary units) of mean 1000 and standard deviation 100. And I have a measuring device with a resolution of 50 and another one with a resolution of 200. I do one measurement with each of them and I get: ##900 \pm 50## and ##1100 \pm 200##. How should I properly combine these 2 measurements? Please let me know if I need to give more details.

Asking about the "proper" way to "combine" measurements is not a well-defined mathematical question. If you don't want to tackle the sophisticated concepts involved in statistics, find some authority that has done similar work and copy what they did.

A slightly inferior version of that approach is to find people who will cross examine you until they can guess how to create a probability model for your problem and provide a solution based on that guess. If you want to pursue that route, let's try to formulate a specific question.

1. What is the population you are considering and what is being measured about it? Define this precisely. (e.g. The population of males between the ages of 20 and 30 in the state of Tennessee and their weights measured in pounds.)

2. Are you assuming the distribution of this population comes from a particular family of probability distributions? If so, what family of distributions? (e.g. lognormal)

3. Apparently you want to estimate some property of that population. What property is it? Is it one parameter of the distribution of the population? - or is it more than one parameter? - enough parameters to define the entire distribution function?

4. How is the population being sampled? Is it randomly sampled such that each member of the population has the same probability of being included in a sample? - or is it sampled in some systematic way? (e.g. pick 10 males at random versus pick 1 male at random from each of the age groups 21,22,23,...29.)

In your example above, if I make up a population and make up a distribution for it, I still don't have information about how the two samples were selected. In particular, did the sampling process involve both picking a measuring instrument and a source at random? Or did the experimenter have two given measuring instruments and decide to use both of them? If so, were both used on the same source, or were they used on two possibly different sources taken from the population of sources?

5. To estimate a parameter of a distribution, some algorithm is performed on a random sample of measurements. A result of such an algorithm is technically called a "statistic". When a "statistic" is used to estimate a parameter of a distribution, the statistic is called an "estimator". Statistics and estimators are random variables because they depend on the random values in samples. A statistic computed from a sample taken from a population usually does not have the same distribution of values as the population. (e.g. Suppose the population has a lognormal distribution. Suppose the statistic is defined by the algorithm "take the mean value of measurements from 10 randomly selected individuals". The distribution of this statistic is not lognormal.)

Since statistics are random variables, they have their own distributions; these distributions have their own parameters (e.g. mean, variance) that can differ from the values of similar parameters in the distribution of the population. So it makes sense to talk about things like "the mean of the sample mean" or "the variance of the sample mean".

kelly0303 said:
However, if I am trying to approximate the real distribution with my measurements
The distribution of what? The population has a distribution. The sample mean has a different distribution.
 
