How do I calculate the confidence interval for a set of data using excel?

In summary, the poster is using Excel to calculate a 95% confidence interval for the percent error of a system. They initially tried the "=CONFIDENCE" formula but realized it calculates the CI of the mean. They are looking for a formula that gives the 95% CI for the actual percent-error values, not the mean. They mention the "=NORMDIST" and "=NORMINV" functions and ask for clarification on how to use them. They also want to calculate the cutoffs at the .025 and .975 points of the curve to determine the CI range. They thank the respondents for their help and add that they believe their data follows a normal distribution.
  • #1
hoodrych
I'm using Excel to calculate this, but I have a few questions. I have a list with 48 values, one per day, for the percent error of something (not important to the question).

I need to find the 95% confidence interval, but not of the mean. I used the "=CONFIDENCE" formula that comes with Excel, but found out that it actually calculates the CI of the mean.

I need to be 95% confident that the percent error will fall between xx% and xx% in the future. Not 95% confident that the mean percent error will fall between xx% and xx%.

There's a formula in Excel, which I think I remember seeing on my TI-84: "=NORMDIST(x,mean,standard_dev,cumulative)". I think I need to use this, right? What exactly does it do? I know how to find the stdev and mean; I'm not sure what I put in the "x", and for cumulative I know I put either "TRUE" or "FALSE" - but I don't know which one I should be putting.
EDIT: I think I'm actually supposed to use the "=NORMINV(p, mu, sigma)" function. What would this give me? How would I find out p in my situation? It's not a textbook-like problem...

How would I go about calculating the value at the .975 point of the curve from the left, then the value at the .975 point from the right (or just .025 from the left again, etc.)? That would give me the CI range, wouldn't it?

Any help is much appreciated, and as soon as possible would be nice.

Thanks!
 
  • #3
Stephen Tashi said:
I don't use MS Windows, but looking at the page http://www.exceluser.com/explore/statsnormal.htm, I think what you want is

x1 = NORMINV(.025,mean,standard_dev) and
x2 = NORMINV(.975,mean,standard_dev).

So that would find the middle 95% of my set of data. But is this what I want? Is the value this gave me the confidence range? I need to do a C.I., and I think this is different.
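For readers working outside Excel, the two NORMINV calls above can be reproduced with Python's standard-library NormalDist; the mean and standard deviation below are made-up placeholders for the sample statistics:

```python
from statistics import NormalDist

# Hypothetical sample statistics (stand-ins for the spreadsheet's
# mean and standard_dev cells)
mean, sd = 0.32, 0.23

dist = NormalDist(mu=mean, sigma=sd)
x1 = dist.inv_cdf(0.025)   # equivalent of =NORMINV(0.025, mean, standard_dev)
x2 = dist.inv_cdf(0.975)   # equivalent of =NORMINV(0.975, mean, standard_dev)
print(x1, x2)              # symmetric about the mean: mean ± 1.96·sd
```

These are the 2.5% and 97.5% points of the fitted normal curve, i.e. the interval that leaves 2.5% in each tail.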
 
  • #5
bamp
 
  • #6
I don't think you know what a "confidence interval" is, and it's a complicated topic. I think what you want is an interval with two numerical endpoints such that you can claim there is a 95% chance that a future single measurement will fall within that interval. There is no answer to this unless you fit a distribution to the data. If you use NORMINV(...) this means that you accept that the data has a normal distribution with the mean and std_deviation that you supply. It is customary to calculate such intervals centered about the mean. There is a 100% chance that such an interval contains the mean. It's in the middle of the interval, after all.

If you are talking about a confidence interval for the mean of a different set of 48 measurements taken from a normal distribution with unknown mean and a given standard deviation, that's a different problem.
 
  • #7
hoodrych said:
I'm using excel to calculate this, but I have a few questions.


I have a list which has 48 values, one per day, for the percent error of something (not important to the question).

I need to find the 95% confidence interval, but not of the mean. I used the "=CONFIDENCE" formula that comes with Excel, but found out that it actually calculates the CI of the mean.

I need to be 95% confident that the percent error will fall between xx% and xx% in the future. Not 95% confident that the mean percent error will fall between xx% and xx%.

There's a formula in Excel, which I think I remember seeing on my TI-84: "=NORMDIST(x,mean,standard_dev,cumulative)". I think I need to use this, right? What exactly does it do? I know how to find the stdev and mean; I'm not sure what I put in the "x", and for cumulative I know I put either "TRUE" or "FALSE" - but I don't know which one I should be putting.


How would I go about calculating the value at the .975 point of the curve from the left, then the value at the .975 point from the right (or just .025 from the left again, etc.)? That would give me the CI range, wouldn't it?

Any help is much appreciated, and as soon as possible would be nice.

Thanks!

The CI would be PERCENTILE(data,0.025) to PERCENTILE(data,0.975)
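Excel's PERCENTILE interpolates linearly between sorted data points; Python's statistics.quantiles with method='inclusive' uses the same convention, so this suggestion can be sketched as follows (the list is a made-up stand-in for the 48 daily values):

```python
from statistics import quantiles

# Made-up stand-in for the 48 daily percent-error values
data = [0.1, -0.3, 0.5, 0.2, 0.7, -0.1, 0.4, 0.0,
        0.6, 0.3, -0.2, 0.8, 0.25, 0.15, 0.55, 0.35]

# n=40 yields cut points every 2.5%; the first is the 2.5th percentile
# and the last the 97.5th, matching =PERCENTILE(data, 0.025 / 0.975)
q = quantiles(data, n=40, method='inclusive')
lo, hi = q[0], q[-1]
print(lo, hi)
```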
 
  • #8
Before you begin reading, I want to say thank you for helping, or attempting to help me. I really appreciate any help you can give me! Also, warning: Wall of Text approaching fast.


Stephen Tashi said:
I think what you want is an interval with two numerical endpoints such that you can claim that there is a 95% chance that a future single measurement will fall within that interval.
Yes. I want, "We are 95% confident that future value will fall in between xxx and xxx".


There is no answer to this unless you fit a distribution to the data. If you use NORMINV(...) this means that you accept that the data has a normal distribution with the mean and std_deviation that you supply. It is customary to calculate such intervals centered about the mean. There is a 100% chance that such an interval contains the mean. It's in the middle of the interval, after all.

I mean, I think my data is a normal distribution? Here, let me explain exactly what I am doing in detail:

Two systems, let's call them #1 and #2.
EACH has TWO lists of 48 values, one of the mkt price in USD and another in the Native currency.

1. I found out the exchange rate to the dollar for each system. (so: #1Native/#1USD and #2Native/#2USD). So now I have TWO lists of 48 numbers, showing the exch. rate for each system.

2. I then find percent error of system #2, since system #1 is the accurate one.
[[ ( (#2native/#2USD)-(#1native/#1USD) ) / (#1native/#1USD) ]] * 100

3. So now I had 1 column of 48 values showing the % error in exch rate of system #2. But there were some negatives, and so I thought, "It doesn't make sense to have a negative % error," right? So I squared every value, then raised it to the power of 1/2, so as to get only positive numbers. Question: Is this the correct thing to do? I had some values at like, -.5, but after ^2 and ^1/2 they are obviously +.5. I'm having second thoughts...

4. I then calculated stdev and mean of the 48 points.
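Steps 1-4 can be sketched in a few lines of Python; the exchange-rate lists are made-up stand-ins for the spreadsheet columns, and the percent error is left signed (no absolute value):

```python
from statistics import mean, stdev

# Made-up stand-ins for the four columns (3 days instead of 48)
sys1_native, sys1_usd = [74.0, 75.2, 73.8], [1.00, 1.01, 0.99]
sys2_native, sys2_usd = [74.5, 74.9, 74.1], [1.00, 1.01, 0.99]

# Steps 1-2: exchange rates, then signed percent error of system #2
pct_error = []
for n1, u1, n2, u2 in zip(sys1_native, sys1_usd, sys2_native, sys2_usd):
    rate1 = n1 / u1    # reference rate (system #1)
    rate2 = n2 / u2    # rate under test (system #2)
    pct_error.append((rate2 - rate1) / rate1 * 100)

# Step 4: sample mean and standard deviation of the errors
m, s = mean(pct_error), stdev(pct_error)
print(pct_error, m, s)
```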

So are these 48 values a normal distribution? The two systems generate a new value every day; I have the previous 48 days. I need to make sure that system #2's % error is not significantly large, and that if we ever get a huge miscalculation from system #2 in the future, it will have fallen into the outer 5% (hence calculating a 95% CI).

Moving on:

There is a function in Excel that calculates the "confidence interval for a population mean". I went to the person who assigned me this project and asked why I was getting very strange values when I tried to use it. He said that the =CONFIDENCE function in Excel calculates it through another method, something about the mean, which we do not need to do. So basically, I don't think I am supposed to calculate the intervals in the "customary fashion centered about the mean". What is the alternative to calculating the CI centered around the mean?


If you are talking about a confidence interval for the mean of a different set of 48 measurements taken from a normal distribution with unknown mean and a given standard deviation, that's a different problem.


I'm not particularly talking about a CI for the mean of a DIFFERENT set of 48 measurements, but for any future measurements. Also, as I said above, I don't think I'm supposed to be calculating the CI for the mean.



Code:
=NORMINV(.025,mean,standard_dev) [[Which returned the value -0.135597829]]
=NORMINV(.975,mean,standard_dev) [[Which returned the value 0.769904]]

So doesn't this just give me the middle 95% of the sample (the 48 values)? So assuming a normal distribution (like I mentioned previously - is this a normal distribution?), would the conclusion be written out in a manner such as: "About 95% of future values will fall in between 0.16134 and 0.7699"?



The CI would be PERCENTILE(data,0.025) to PERCENTILE(data,0.975)

What is the difference between THIS and the =NORMINV ones I gave above?
Microsoft defines =PERCENTILE as "Returns the k-th percentile of values in a range." Microsoft defines =NORMINV as "returns the value x such that, with probability p, a normal random variable with mean mu and standard deviation sigma takes on a value less than or equal to x."
Aren't they both essentially the same thing?!
 
  • #9
hoodrych said:
...What is the difference between THIS and the =NORMINV ones I gave above?
Microsoft defines =PERCENTILE as "Returns the k-th percentile of values in a range." Microsoft defines =NORMINV as "returns the value x such that, with probability p, a normal random variable with mean mu and standard deviation sigma takes on a value less than or equal to x."
Aren't they both essentially the same thing?!

No they're completely different. PERCENTILE takes the values directly from a data set (i.e. the empirical distribution) whereas NORMINV is from an assumed/fitted distribution and requires separate parameter estimates.
 
  • #10
hoodrych said:
Yes. I want, "We are 95% confident that future value will fall in between xxx and xxx".



The problem here is to distinguish between terms like "confident" as used in mathematical statistics and the word "confident" as used by ordinary people. Will the readers of this report expect you to use the term like a statistician, or will they be laymen who assume it means "there is a 95% probability that a randomly chosen future value will fall between the limits xxx and yyy"? If your readers understand statistics, you'd better read about "confidence intervals" before attempting to use that term.


I mean, I think my data is a normal distribution?

The other thread that graphed data did not show it in a form where anybody could eyeball the data to suggest a distribution. You should plot the data in the form of a cumulative histogram.
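A cumulative histogram (empirical CDF) is easy to build by hand: sort the values and record, for each one, the fraction of observations at or below it. A minimal sketch with made-up data:

```python
# Made-up stand-in for the signed percent-error values
data = [0.1, -0.3, 0.5, 0.2, 0.7, -0.1, 0.4, 0.0]

# Each sorted value paired with the fraction of the sample <= that value
pairs = [(x, (i + 1) / len(data)) for i, x in enumerate(sorted(data))]
for x, frac in pairs:
    print(f"{x:6.2f}  {frac:5.3f}")
```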

Two systems, let's call them #1 and #2.
EACH has TWO lists of 48 values, one of the mkt price in USD and another in the Native currency.

1. I found out the exchange rate to the dollar for each system. (so: #1Native/#1USD and #2Native/#2USD). So now I have TWO lists of 48 numbers, showing the exch. rate for each system.

2. I then find percent error of system #2, since system #1 is the accurate one.
[[ ( (#2native/#2USD)-(#1native/#1USD) ) / (#1native/#1USD) ]] * 100

3. So now I had 1 column of 48 values showing the % error in exch rate of system #2. But there were some negatives, and so I thought, "It doesn't make sense to have a negative % error," right? So I squared every value, then raised it to the power of 1/2, so as to get only positive numbers. Question: Is this the correct thing to do?

Doing statistics and probability problems for non-mathematician clients or bosses usually turns into an exercise in mind-reading. I doubt that taking the absolute value of the data (which is what squaring and taking the square root does) is the proper thing to do. From a mathematical point of view, it is quite possible to deal with a distribution that produces both negative and positive numbers.

I had some values at like, -.5, but after ^2 and ^1/2 they are obviously +.5 . I'm having second thoughts...

4. I then calculated stdev and mean of the 48 points.

So are these 48 values a normal distribution? The two systems generate a new value every day; I have the previous 48 days. I need to make sure that system #2's % error is not significantly large, and that if we ever get a huge miscalculation from system #2 in the future, it will have fallen into the outer 5% (hence calculating a 95% CI).

What you want is called a "prediction interval". This is not the same as a "confidence interval" - at least to statisticians.
Moving on:

There is a function in Excel that calculates the "confidence interval for a population mean". I went to the person who assigned me this project and asked why I was getting very strange values when I tried to use it. He said that the =CONFIDENCE function in Excel calculates it through another method, something about the mean, which we do not need to do.

From what I read on the web, the person is correct. Realize that you don't know the true mean value of all future data. You only know the mean of a sample of the data. So if you tackle the problem of predicting the true mean, you get into territory where the CONFIDENCE function would be relevant.

So basically, I don't think I am supposed to calculate the intervals in the "customary fashion centered about the mean". What is the alternative to calculating the CI centered around the mean?

It's actually very hard to rigorously justify using a particular interval for confidence or prediction intervals. But it is intuitively appealing to include the mean in the interval if the frequency histogram of your data has a peak near the mean. So I suggest you not rebel against having the sample mean in the center of the interval unless the most frequent values of the data are nowhere near it.

I'm not particularly talking about a CI for the mean of a DIFFERENT set of 48 measurements, but for any future measurements. Also, as I said above, I don't think I'm supposed to be calculating the CI for the mean.

This confirms that what you want is a "prediction interval", not a CI.

Code:
=NORMINV(.025,mean,standard_dev) [[Which returned the value -0.135597829]]
=NORMINV(.975,mean,standard_dev) [[Which returned the value 0.769904]]

So doesn't this just give me the middle 95% of the sample (the 48 values)? So assuming a normal distribution (like I mentioned previously - is this a normal distribution?), would the conclusion be written out in a manner such as: "About 95% of future values will fall in between 0.16134 and 0.7699"?

Essentially, yes. Your statement that it gives "the middle 95% of the sample" may be wrong, since the computations are done in an indirect way. They essentially fit a normal distribution to the data and then compute a 95% prediction interval. This may or may not contain 95% of the data in the sample.

What is the difference between THIS and the =NORMINV ones I gave above?
Microsoft defines =PERCENTILE as "Returns the k-th percentile of values in a range." Microsoft defines =NORMINV as "returns the value x such that, with probability p, a normal random variable with mean mu and standard deviation sigma takes on a value less than or equal to x."
Aren't they both essentially the same thing?!

No, they aren't the same. The PERCENTILE method makes a prediction assuming that the data is exactly the distribution of future values. If you make this assumption, you can say that 95% of the data fall in the interval you get. It is a simple way to present the results.

The NORMINV method first fits a normal distribution to the data and then makes a prediction assuming that this fit is correct.

I suggest the following: Don't take the absolute value of the data. Plot the data as a cumulative histogram. Use both the NORMINV and PERCENTILE methods to get prediction intervals and see if there is much difference between them.
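The suggested comparison - a fitted-normal (NORMINV-style) interval versus an empirical-percentile (PERCENTILE-style) interval on the same data - can be sketched as follows; the data is a made-up stand-in for the 48 values:

```python
from statistics import NormalDist, mean, stdev, quantiles

# Made-up stand-in for the 48 signed percent-error values
data = [0.31, -0.14, 0.52, 0.20, 0.77, -0.05, 0.44, 0.12,
        0.60, 0.28, -0.22, 0.71, 0.25, 0.15, 0.55, 0.35,
        0.40, 0.05, 0.33, 0.48, -0.10, 0.66, 0.22, 0.38]

# Method 1: fit a normal distribution, then take its 2.5% / 97.5% points
fit = NormalDist(mu=mean(data), sigma=stdev(data))
norm_lo, norm_hi = fit.inv_cdf(0.025), fit.inv_cdf(0.975)

# Method 2: take the 2.5th / 97.5th percentiles of the data itself
q = quantiles(data, n=40, method='inclusive')
emp_lo, emp_hi = q[0], q[-1]

print("fitted-normal interval:", norm_lo, norm_hi)
print("empirical interval:    ", emp_lo, emp_hi)
```

If the two intervals differ a lot, that is itself evidence that the normal fit is questionable.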
 
  • #11
No they're completely different. PERCENTILE takes the values directly from a data set (i.e. the empirical distribution) whereas NORMINV is from an assumed/fitted distribution and requires separate parameter estimates.

Then PERCENTILE is not what I want? What do you mean by separate parameter estimates? I need to make sure the exchange-rate % error is not going to be a problem. My boss can determine whether the % error will cause a problem if I can give him, with 95% confidence, the range future values will lie between.


The problem here is to distinguish between terms like "confident" as used in mathematical statistics and the word "confident" as used by ordinary people. Will the readers of this report expect you to use the term like a statistician, or will they be laymen who assume it means "there is a 95% probability that a randomly chosen future value will fall between the limits xxx and yyy"? If your readers understand statistics, you'd better read about "confidence intervals" before attempting to use that term.

The readers are actually statisticians; they are swamped at the moment, so I'm taking on the task. So saying "we are 95% confident that ..." means what in statistician terms? I can't recall for the life of me.
Two systems, let's call them #1 and #2.
EACH has TWO lists of 48 values, one of the mkt price in USD and another in the Native currency.

1. I found out the exchange rate to the dollar for each system. (so: #1Native/#1USD and #2Native/#2USD). So now I have TWO lists of 48 numbers, showing the exch. rate for each system.

2. I then find percent error of system #2, since system #1 is the accurate one.
[[ ( (#2native/#2USD)-(#1native/#1USD) ) / (#1native/#1USD) ]] * 100

3. So now I had 1 column of 48 values showing the % error in exch rate of system #2. But there were some negatives, and so I thought, "It doesn't make sense to have a negative % error," right? So I squared every value, then raised it to the power of 1/2, so as to get only positive numbers. Question: Is this the correct thing to do?
Doing statistics and probability problems for non-mathematician clients or bosses usually turns into an exercise in mind-reading. I doubt that taking the absolute value of the data (which is what squaring and taking the square root does) is the proper thing to do. From a mathematical point of view, it is quite possible to deal with a distribution that produces both negative and positive numbers.

... I'm no statistician, but it's my understanding that a percentage by definition must be between 0 and 100. Especially in the context of this situation - dealing with percent error. I mean, how could I have a negative percent error?


In step 2: [[ ( (#2native/#2USD)-(#1native/#1USD) ) / (#1native/#1USD) ]] * 100


Whenever I got a negative value, it was because (#2native/#2USD) under-approximated the actual exchange rate (#1native/#1USD). [Remember, system #1 is the correct exchange rate.] Under-approximating by 2 units, for example, is just as bad as over-approximating by 2 units, since all I'm interested in is how far away from 0 the error was.

I'll wait for your confirm/deny before I compute the data again without taking the absolute value.


It's actually very hard to rigorously justify using a particular interval for confidence or prediction intervals. But it is intuitively appealing to include the mean in the interval if the frequency histogram of your data has a peak near the mean. So I suggest you not rebel against having the sample mean in the center of the interval unless the most frequent values of the data are nowhere near it.

I'm not sure what you mean exactly. But I was told to do a 95% confidence interval, so there's no changing that.



Code:
=NORMINV(.025,mean,standard_dev) [[Which returned the value -0.135597829]]
=NORMINV(.975,mean,standard_dev) [[Which returned the value 0.769904]]
So doesn't this just give me the middle 95% of the sample (the 48 values)? So assuming a normal distribution (like I mentioned previously - is this a normal distribution?), would the conclusion be written out in a manner such as: "About 95% of future values will fall in between 0.16134 and 0.7699"?


Essentially, yes. Your statement that it gives "the middle 95% of the sample" may be wrong, since the computations are done in an indirect way. They essentially fit a normal distribution to the data and then compute a 95% prediction interval. This may or may not contain 95% of the data in the sample.

How/why are the computations done in an indirect way? Are you saying that the reason they may or may not contain 95% of the data in the sample is because my data may NOT be a normal distribution, in which case it would NOT contain 95% of the data?

So you're saying that what I have done is compute a PREDICTION INTERVAL (which, as you said previously, is what I wanted), but the method I used to get it only works with populations that have a normal distribution. (Quick side question: Is my "population" the 48 values from the sample, or the values that come out each day after stock-market close in the future?)


I suggest the following: Don't take the absolute value of the data. Plot the data as a cumulative histogram. Use both the NORMINV and PERCENTILE methods to get prediction intervals and see if there is much difference between them.

Like I said earlier, I'll wait for you to read my post about justifying the absolute value of the data before doing it.

Instead of getting prediction intervals for both NORMINV and PERCENTILE, wouldn't it be easier if I could just find out what KIND of distribution my data has, so I can accurately conduct a test that finds the prediction intervals (or CI, whatever)?
 
  • #12
The readers are actually statisticians; they are swamped at the moment, so I'm taking on the task. So saying "we are 95% confident that ..." means what in statistician terms? I can't recall for the life of me.

Since you are not a statistician and your readers are, I advise you not to make claims about "confidence" and "confidence intervals" in your report. If you insist on using the term "confidence," you should look it up in a textbook or on a respectable web page. It is a technically complicated subject. Suffice it to say that the statement "-0.13 to .76 is a 95% confidence interval" does not mean that there is a 95% chance a random future data point will fall in that interval.


... I'm no statistician, but it's my understanding that a percentage by definition must be between 0 and 100. Especially in the context of this situation - dealing with percent error. I mean, how could I have a negative percent error?

"Percentage" may have to be between 0 and 100 but "Percentage error" doesn't. The way you calculated it (before taking the absolute value) looks OK to me.

Under-approximating by 2 units, for example, is just as bad as over-approximating by 2 units, since all I'm interested in is how far away from 0 the error was.

The question is whether the sign of the error matters to your audience, not whether it matters to you. For example, suppose they try to make money cycling money through both systems. A plus error might have a different financial consequence than a minus error.

I was told to do a 95% confidence interval, so there's no changing that.

Maybe the person who assigned you this task wasn't speaking carefully or wasn't a mathematical statistician. Maybe they are just somebody whose job title says "statistician" and they really wanted a prediction interval.


How/why are the computations done in an indirect way? Are you saying that the reason they may or may not contain 95% of the data in the sample is because my data may NOT be a normal distribution, in which case it would NOT contain 95% of the data?

You apparently do not understand the difference between a distribution (such as the normal distribution) and a sample of data. It is quite unlikely that a sample of data randomly selected from a distribution will perfectly reflect all the properties of the distribution. If you fit a normal distribution to your data by setting the mean of the data equal to the mean of the distribution and the standard deviation of the data equal to the standard deviation of the distribution then other things about the data need not match the properties of the distribution. In particular, the fraction of sample values that fall between xxx and yyy in the sample may not match the probability that a value chosen from the distribution falls in that interval.

Fitting distributions to data is a subjective matter. People fit distributions to data by using parameters such as the mean and standard deviation when they feel that the sample mean and sample standard deviation are likely to provide good estimates.
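The point that a fitted distribution need not reproduce the sample can be shown directly: fit a normal to a small made-up sample containing an outlier and count how much of the sample the fitted 95% interval actually covers:

```python
from statistics import NormalDist, mean, stdev

# Small made-up sample; the outlier drags the fit away from the bulk
data = [0.1, 0.2, 0.2, 0.3, 0.3, 0.3, 0.4, 0.4, 0.5, 2.0]

fit = NormalDist(mu=mean(data), sigma=stdev(data))
lo, hi = fit.inv_cdf(0.025), fit.inv_cdf(0.975)

# Fraction of the sample inside the fitted 95% interval: 90% here, not 95%
inside = sum(lo <= x <= hi for x in data) / len(data)
print(f"({lo:.3f}, {hi:.3f}) contains {inside:.0%} of the sample")
```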


So you're saying that what I have done is compute a PREDICTION INTERVAL (which as you said previously, is what I wanted), but the method I used to get it only works with populations that have a normal distribution.

NORMINV only works for normal distributions. Prediction intervals can be computed for non-normal distributions, but you'd have to use different functions.

(Quick side-question: Is my "population" the 48 values from the sample, or the values that come out each day after stock market close in the future).

In your problem, I think the population is all values that could ever come out, on any day.


Instead of getting prediction intervals for both NORMINV and PERCENTILE, wouldn't it be easier if I could just find out what KIND of distribution my data has, so I can accurately conduct a test that finds the prediction intervals (or CI, whatever)?

That would be the usual procedure and if you have time to learn how to fit distributions to data, you should do this.
 
  • #13
"Percentage" may have to be between 0 and 100 but "Percentage error" doesn't. The way you calculated it (before taking the absolute value) looks OK to me.

How can you have less than 0 percent error? 0% error means the value is correct. How can something be so correct that it has a negative percent error...

Can you please explain why "percentage error" does not have to be between 0 and 100 while "percentage" does, in this scenario? And also your reasoning behind not taking the absolute value?

Edit:

Found the answer.
http://www.pstcc.edu/departments/natural_behavioral_sciences/E2010D0101.gif

It's not possible to have a negative percent error.
 
  • #14
I tried a few best-fit trendlines. None of them look like they correlate.

Linear: -.0213
Power: .0282
Polynomial: .0468
Logarithmic: .0055
Exponential: .0419

None of those R² values are even close. Grr.
 
  • #15
hoodrych said:
Can you please explain why "percentage error" does not have to be between 0 and 100 and "percentage" does, in this scenario?
I'm not saying that any arbitrarily defined quantity "has to be" one way or another. I'm saying that I can see cases where a person might want to know if the error is too high or too low, so allowing negative percentage errors makes sense.

Edit:

Found the answer.
http://www.pstcc.edu/departments/natural_behavioral_sciences/E2010D0101.gif

It's not possible to have a negative percent error.

You can find other sources on the web that use negative percentage errors. Look for the topic "negative percentage error".
 
  • #17
You're still misusing the term "confidence interval". You're still taking the absolute value of the error, and you haven't shown a cumulative histogram of the data. So I won't repeat myself in the other thread.
 
  • #18
Stephen Tashi said:
You're still misusing the term "confidence interval". You're still taking the absolute value of the error, and you haven't shown a cumulative histogram of the data. So I won't repeat myself in the other thread.

I'm sorry, you're right. It's just that my co-worker said CI when giving me the assignment. It looks like he misused the term, though.

I'm not saying that any arbitrarily defined quantity "has to be" one way or another. I'm saying that I can see cases where a person might want to know if the error is too high or too low, so allowing negative percentage errors makes sense.

I know that not taking the absolute value can be useful in some situations. I have been asking you whether taking the absolute value in this scenario was the right thing to do, not whether the abs value must be taken every time when calculating % error...

Do you agree with taking the abs value or not when calculating percent error in this scenario ONLY? And if not, why...


and you haven't shown a cumulative histogram of the data.

I have been working on getting this done. On a side note, though: I remember that in my high-school stats class there was a different way we went about getting the distribution/best fit. Unfortunately, I don't even remember what that method was.
 

1. What is the purpose of finding the confidence interval of a set of data?

The purpose of finding the confidence interval (C.I.) of a set of data is to estimate the range in which the true population mean or proportion lies with a certain level of confidence. It helps us to better understand the accuracy and precision of our sample data in representing the entire population.

2. How do you calculate the confidence interval of a set of data?

The confidence interval is calculated by taking the mean of the sample data and adding or subtracting the margin of error, which is determined by the standard error and the desired level of confidence. The formula for calculating the confidence interval varies depending on the type of data and the sample size, but it typically follows the formula: mean ± (critical value * standard error).
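That formula can be sketched in Python; here the normal critical value (about 1.96 for 95%) stands in for the t critical value, which it closely approximates for moderate sample sizes, and the data is made up:

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

# Made-up sample data
data = [0.31, -0.14, 0.52, 0.20, 0.77, -0.05,
        0.44, 0.12, 0.60, 0.28, -0.22, 0.71]

m = mean(data)
se = stdev(data) / sqrt(len(data))   # standard error of the mean
z = NormalDist().inv_cdf(0.975)      # critical value, about 1.96
ci = (m - z * se, m + z * se)        # mean ± (critical value × standard error)
print(ci)
```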

3. What is the significance of the confidence level in finding the C.I.?

The confidence level, typically denoted as a percentage (e.g. 95% confidence level), represents the probability that the true population mean or proportion falls within the calculated confidence interval. In other words, the higher the confidence level, the more confident we can be that the true value lies within the reported range.

4. How does the sample size affect the C.I.?

The sample size plays a crucial role in determining the width of the confidence interval. In general, a larger sample size will result in a narrower confidence interval, as it provides more precise and accurate estimates of the population parameters.

5. What are some limitations of using the C.I. to estimate population parameters?

One limitation of the confidence interval is that it assumes that the sample data is normally distributed, and that the sample is representative of the entire population. In reality, this may not always be the case, which can affect the accuracy of the estimated interval. Additionally, the confidence interval only provides an estimate of the population parameter and cannot guarantee its exact value.
