Question about random variables

In summary, we discussed the concept of random variables and how they can be used to estimate basic parameters of a distribution. We also talked about the idea of independent and identically distributed (IID) random variables and how they are crucial in classical statistics. Finally, we clarified the difference between independent samples from a single random variable and samples from different random variables.
  • #1
musicgold
I think I understand the concept of random variable (for example, the number of heads when three coins are tossed together or the temperature of a place at 6.00am every morning).

I am, however, confused because I have seen some material that refers to even the values taken by a random variable (its instances) as random variables. For example, consider the text below from a PowerPoint presentation. The second part, in particular, calls the members of a sample independent random variables.

How should I think about this?

Thanks.


Text from a presentation.
“Suppose we are given a random variable X with some unknown probability distribution. We want to estimate the basic parameters of this distribution, like the expectation of X and the variance of X. The usual way to do this is to observe n independent variables all with the same distribution as X”

“Let X1,X2,…,Xn be independent and identically distributed random variables having c. d. f. F and expected value μ. Such a sequence of random variables is said to constitute a sample from the distribution F.”
 
  • #2
The language is a little loose. He seems to be using "random variable" to mean both the variable itself and a sample of it. For example, the outcome of a coin toss is a random variable with two possible outcomes. Once you toss the coin you are taking a sample.
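To make the distinction concrete, here is a minimal Python sketch (representing the coin's distribution as a dictionary is just an illustration, not standard terminology):

```python
import random

# The random variable: the outcome of one fair coin toss, described
# by its distribution, not by any particular value.
coin = {"H": 0.5, "T": 0.5}

# A sample (realization): one observed outcome of the toss.
observed = random.choices(list(coin), weights=list(coin.values()))[0]
print(observed)  # e.g. 'H': a fixed value, not a random variable
```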
 
  • #3
If you have a random variable X and you consider the process of taking n independent samples of it (as opposed to taking one definite sample with fixed numerical values) then you have a random vector. Random vectors are sometimes called random variables (just as in vector math, a "variable" could represent a vector.)

When you think about statistics, it is a mistake to try to think about a typical problem in terms of a single random variable. Anything that is a function of a random variable is another random variable to worry about. Thus a random sample of n independent realizations of the random variable X is a random vector. The mean of this sample is another random variable. The variance of the sample is another random variable. The unbiased estimator of the sample variance is another random variable.

A statistic, such as the t-statistic, is a function of the sample values, so it becomes another random variable. (This is particularly confusing if you are used to thinking of "a statistic" as a definite numerical value, such as 78.3 years. In statistics, a statistic is any function of the sample values and hence it is a random variable. Adding further to the confusion is the fact that terms like "sample variance" and "sample mean" are sometimes used to refer to specific numerical results instead of functions of random variables.)
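A short simulation makes this concrete. In the Python sketch below (the exponential distribution is just a stand-in for some unknown X), each sample of size n yields a different value of the sample mean and sample variance, so those statistics are random variables in their own right:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each row is one random sample: a random vector of n independent
# realizations of X. The exponential distribution merely stands in
# for an unknown distribution.
n, trials = 30, 10_000
samples = rng.exponential(scale=1.0, size=(trials, n))

# The sample mean and the (unbiased) sample variance are functions
# of the sample, so each fresh sample yields a different value.
means = samples.mean(axis=1)
variances = samples.var(axis=1, ddof=1)

print(means[:3])      # three realizations of the sample-mean random variable
print(variances[:3])  # three realizations of the sample-variance random variable
```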
 
  • #4
Thanks folks.

Ok. Now I get it. But I have a follow-up question. The term IID (independent and identically distributed) is a qualifier commonly applied to random variables.

If I am taking samples of a random variable, I am picking points from one distribution. Why do I have to use the qualifier 'identically distributed'?

Also, what does a non-identically distributed sample of a random variable look like?
 
  • #5
musicgold said:
I think I understand the concept of random variable (for example, the number of heads when three coins are tossed together or the temperature of a place at 6.00am every morning).

I am, however, confused because I have seen some material that refers to even the values taken by a random variable (its instances) as random variables. For example, consider the text below from a PowerPoint presentation. The second part, in particular, calls the members of a sample independent random variables.

How should I think about this?

Thanks.

Text from a presentation.
“Suppose we are given a random variable X with some unknown probability distribution. We want to estimate the basic parameters of this distribution, like the expectation of X and the variance of X. The usual way to do this is to observe n independent variables all with the same distribution as X”

“Let X1,X2,…,Xn be independent and identically distributed random variables having c. d. f. F and expected value μ. Such a sequence of random variables is said to constitute a sample from the distribution F.”

When one has a large number of independent samples of a distribution, the average of the sample is a draw from a nearly normally distributed random variable, assuming that the original distribution has finite variance. Further, the mean of the nearly normal distribution is the mean of the original distribution, and its variance converges to zero as the sample size grows. This is the Central Limit Theorem.

http://en.wikipedia.org/wiki/Central_limit_theorem#Classical_CLT

Classical statistics is possible because large averages are close to normally distributed even when the original distribution is unknown. All you need is finite variance and mean. So these two parameters can be accurately estimated from the averages of independent samples, because normal distributions are well understood.

The crux of this line of reasoning is the idea of independent sampling. Independent samples from a single random variable are equivalent to samples of different random variables with the same distribution. Independence means that nothing is changed by the sampling process. The samples are the same as if they were taken from different random variables.

It is not unfair to say that the thing that differentiates probability theory from analysis is the idea of independence. This in my opinion is what you should try to understand. Then everything else will make sense.
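A quick illustration of the Central Limit Theorem claim above, as a Python sketch (the exponential distribution, with mean 1 and variance 1, stands in for the unknown distribution):

```python
import numpy as np

rng = np.random.default_rng(0)

# Average n independent draws from a decidedly non-normal distribution
# (exponential with mean 1 and variance 1), repeated 10,000 times.
for n in (1, 10, 100, 1000):
    averages = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    # The mean of the averages stays near the true mean (1.0) while
    # their variance shrinks roughly like 1/n, as described above.
    print(n, round(averages.mean(), 3), round(averages.var(), 5))
```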
 
  • #6
A sequence of random variables X1, X2, ... is identically distributed if all have the same distribution function; then they all have the same set of possible values. For example, X1 is the first flip of a coin, X2 is the second flip, and so on. You could instead have a bunch of random variables all with different distributions: for example, X1 is the flip of a coin, X2 is the roll of a die, etc. They are different random variables.

But if you repeatedly sample the same random variable then your results are necessarily identically distributed, i.e. X1 is one point drawn from the given distribution, X2 is another point drawn from the same distribution, and so on. I think it may be semantics; I know about probability, but a statistician may use different language. I think mathman said it well above.
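In code, the contrast might look like this (a Python sketch; the third "spinner" distribution is a made-up example):

```python
import random

random.seed(0)

# Identically distributed: every Xi is a flip of the same fair coin.
coin_flips = [random.choice("HT") for _ in range(5)]

# Not identically distributed: each Xi comes from a different distribution.
mixed = [
    random.choice("HT"),   # X1: a coin flip
    random.randint(1, 6),  # X2: a die roll
    random.randint(0, 9),  # X3: a ten-valued spinner (hypothetical)
]

print(coin_flips)
print(mixed)
```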
 
  • #8
If you are picking points "from one distribution" then they are "identically distributed".

As for "non-identically distributed", consider this- flip a coin and roll a single die. The set of "outcomes" is
(H, 1), (H, 2), (H, 3), (H, 4), (H, 5), (H, 6), (T, 1), (T, 2), (T, 3), (T, 4), (T, 5), (T, 6).
 
  • #9
musicgold said:
Why do I have to use the qualifier 'identically distributed'

Because if the distributions are not identical the steps that follow would not be valid.

musicgold said:
what does a non-identically distributed sample of a random variable look like

The face value of a playing card drawn from a pack without replacement is a simple example.

(Crossposted with HallsofIvy, but I think my example is better ;) so I will let it stand)
 
  • #10
MrAnchovy said:
The face value of a playing card drawn from a pack without replacement is a simple example.

... but of course that is an example of dependent draws: conditioned on the earlier cards, the distribution of each later draw changes, even though each draw on its own is uniform over the ranks. So perhaps HallsofIvy's example is better after all.
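A small simulation illustrates the dependence (a Python sketch that tracks face values only):

```python
import random

random.seed(0)

# Face values only: 13 ranks, four cards of each rank, drawn without
# replacement from a shuffled deck.
trials = 100_000
matches = 0
for _ in range(trials):
    deck = [rank for rank in range(13) for _ in range(4)]
    random.shuffle(deck)
    if deck[1] == deck[0]:  # second draw has the same rank as the first
        matches += 1

# With replacement this frequency would be 4/52 (about 0.077); without
# replacement it is 3/51 (about 0.059), so the second draw depends on
# the first.
print(matches / trials)
```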
 
  • #11
HallsofIvy said:
As for "non-identically distributed", consider this- flip a coin and roll a single die. The set of "outcomes" is (H, 1), (H, 2), (H, 3), (H, 4), (H, 5), (H, 6), (T, 1), (T, 2), (T, 3), (T, 4), (T, 5), (T, 6).

I am not sure, but it seems the distribution of these outcomes will be identical. They will have a uniform distribution with 8.3% chance for each outcome. I actually ran a simulation and I was getting an almost uniform distribution. Is that right?
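(For reference, a simulation along these lines can be written in a few lines of Python; this sketch is an illustration, not necessarily the exact code that was run:)

```python
from collections import Counter
import random

random.seed(0)

trials = 120_000
counts = Counter(
    (random.choice("HT"), random.randint(1, 6)) for _ in range(trials)
)

# Each of the 12 (coin, die) pairs should occur with relative frequency
# close to 1/12, about 8.3%.
for outcome in sorted(counts):
    print(outcome, round(counts[outcome] / trials, 4))
```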
 
  • #12
Hey musicgold.

The easiest way to think about a random variable in any context is basically that you have a function that maps a value to a corresponding probability. It's not the most rigorous way of defining it, but for most purposes this is what a random variable is.

You basically associate an event with a probability. In a continuous distribution the event is an interval of non-zero length (i.e. [a, b] with a < b), while in a discrete distribution you associate each particular value with a probability.

If that assignment satisfies the Kolmogorov axioms (all probabilities are greater than or equal to 0, they add up to 1, and so on), then you have a valid random variable.
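As a sketch of that value-to-probability view in Python (a fair die is just an example):

```python
# A discrete random variable represented as a map from values to
# probabilities (a probability mass function), here for a fair die.
pmf = {face: 1 / 6 for face in range(1, 7)}

# The conditions mentioned above: non-negative probabilities that sum to 1.
assert all(p >= 0 for p in pmf.values())
assert abs(sum(pmf.values()) - 1.0) < 1e-12

# The probability of an event (a set of values) is the sum over its members.
even = {2, 4, 6}
print(sum(pmf[v] for v in even))  # 0.5
```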
 
  • #13
I think you are confused now, musicgold. You are correct about the 1/12 probability, but you are now talking about the joint distribution of two random variables. When I mentioned this above I was talking about a sequence of random variables.

Flip a coin repeatedly. X1 is the first flip, X2 is the second, X3 is the third, etc. You generate a sequence {X1, X2, X3, ...} of random variables. Since each is drawn from the same distribution, P(H) = P(T) = 1/2, the random variables X1, X2, ... are identically distributed. Now generate a sequence {X1, X2, X3, ...} where, for example, X1 is the result of flipping a coin, X2 is the result of rolling one die, X3 is the result of spinning the wheel of fortune, X4 is the result of... You now have a sequence of random variables {X1, X2, X3, ...} which are not identically distributed, because they are not all drawn from the same distribution.

Don't confuse this with what you did above. The outcomes that each have probability 1/12 are outcomes of the single combined experiment "one flip of a coin and one roll of a die"; that is why an ordered pair describes one outcome. Repeated runs of that combined experiment (flip, roll) are identically distributed.
 

1. What is a random variable?

A random variable is a variable whose value is determined by the outcome of a random event. It can take on different values with different probabilities.

2. What is the difference between a discrete and continuous random variable?

A discrete random variable can only take on a finite or countably infinite number of values, while a continuous random variable can take on any value within a given range.
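A minimal Python illustration of the two kinds, using a die roll and a uniform draw as examples:

```python
import random

# Discrete: a die roll takes one of six countable values.
print(random.randint(1, 6))

# Continuous: a uniform draw can take any real value in [0, 1).
print(random.random())
```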

3. How are probability distributions related to random variables?

A probability distribution is a function that assigns probabilities to different values of a random variable. It describes the likelihood of different outcomes occurring.

4. What is the expected value of a random variable?

The expected value of a random variable is the weighted average of all possible values that the variable can take on, with the weights being the probabilities of each value.
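A worked Python example for a fair six-sided die:

```python
# The expected value as a probability-weighted average, for a fair die:
# E[X] = sum of v * P(X = v) over all values v, which gives 3.5 here.
pmf = {face: 1 / 6 for face in range(1, 7)}
print(sum(v * p for v, p in pmf.items()))  # 3.5
```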

5. How are random variables used in statistical analysis?

Random variables are used in statistical analysis to model and understand uncertainty in data. They can help determine the likelihood of certain outcomes and make predictions based on probability distributions.
