## |X| is a random variable

I have a question, we know that a random variable X is a function that maps a real number to an event in the Sample Space. But if X is a random variable, is the absolute value of X, say |X|, a random variable too? Why? I am almost sure that it is not, because we cannot tell whether an outcome was positive or negative.
Any suggestions?

 I have a question, we know that a random variable X is a function that maps a real number to an event in the Sample Space.
Did you mean that it maps an event to a real number?

 I am almost sure that it is not, because we cannot tell whether an outcome was positive or negative.
What are you using as the set of outcomes? R? Why do you think we can't tell if an element of R is positive or negative?

In any case, absolute value is just a function. |X| is defined in the same way that you would define X², (1/X), or (X+3).
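As a minimal sketch (the outcome names and values below are made up for illustration), a random variable can be modeled as a plain mapping from outcomes to real numbers, and |X| is obtained by composing with the ordinary absolute-value function, exactly like X², 1/X, or X + 3:

```python
# A sample space with three outcomes, and a random variable X: S -> R.
# (Outcome names and values are invented for this example.)
S = {"a", "b", "c"}
X = {"a": -1.5, "b": 0.0, "c": 2.0}

# |X| is built from X the same way X**2 or X + 3 would be:
# apply the ordinary function to X's value at each outcome.
abs_X = {w: abs(X[w]) for w in S}
X_squared = {w: X[w] ** 2 for w in S}
X_plus_3 = {w: X[w] + 3 for w in S}

print(abs_X["a"])  # 1.5
```

The composition is pointwise: |X| is just another function from S to R, so it satisfies the same definition of a random variable that X does.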

 Yes, I meant that; I forgot to be more explicit about it. Let's say that a random variable X is a real-valued function defined over a sample space S. Consider the opposite: if |X| is a random variable, then is X a random variable too?

You cannot reconstruct X from |X|, if that's what you mean: there are lots of random variables all with the same absolute value.

In other words, you can have |Y| = |X|, and yet have X and Y be different random variables.

For example, X and |X| are (usually) two different random variables that have the same absolute value.

 Good, so X and |X| are random variables. Let me put it this way:
1. If X is a random variable, then |X| is also a random variable.
2. If |X| is a random variable, then X is also a random variable.
In the first case, we say that X and |X| are two different random variables. But I do not see any condition for |X| to be a random variable given that X is a random variable, so I think this statement is not true at all. In the second case, since we cannot map a real number back to the sample space, say using |X|, the fact that |X| is a random variable does not imply that X is a random variable too, so this statement is false. What do you think?

 In the first case, we say that X and |X| are two different random variables. But I do not see any condition for |X| to be a random variable given that X is a random variable, so I think this statement is not true at all.
I repeat, the same method you use to construct random variables such as X², 1/X, and (X+3) also works to construct |X|.

 In the second case, since we cannot map a real number back to the sample space, say using |X|, the fact that |X| is a random variable does not imply that X is a random variable too, so this statement is false.
What kind of thing is X? It makes absolutely no sense to speak about |X| unless you've given a meaning to X.

 Good point, I think you are right. Well, in this case let's say that for the sample space S = {1, 2, 3}, X is a random variable such that X(w) = {0.2, 0.4, 0.4} for every w from S. How about that? For |X|, say |X|^-1, the inverse image of |X|, how can we map an outcome back to the sample space?

 Well, in this case let's say that for the sample space S = {1, 2, 3}, X is a random variable such that X(w) = {0.2, 0.4, 0.4} for every w from S.
In this particular example, |X| = X. And, I presume you meant to say that:
X(1) = 0.2
X(2) = 0.4
X(3) = 0.4
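Typed out as a sketch (treating X as a mapping on S = {1, 2, 3}, as above), this example makes the point concrete: every value of X is nonnegative, so |X| coincides with X:

```python
S = {1, 2, 3}
X = {1: 0.2, 2: 0.4, 3: 0.4}

abs_X = {w: abs(X[w]) for w in S}

# Every value of X is nonnegative, so taking absolute values changes nothing.
print(abs_X == X)  # True
```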

 For |X|, say |X|^-1, inverse image of |X|, how can we map back an outcome to the sample space?
I'm not even sure what you're asking here. Maybe explicitly typing things would help.

I'm going to define f(x) := |x| for clarity.

X is a random variable. It maps outcomes to real numbers.
f is a function. It maps outcomes to outcomes.
f is a function. It maps random variables to random variables.

(Yes, this is an abuse of notation -- we can "lift" ordinary functions of the outcomes to become functions on random variables. But since they're so closely related, we typically use the same notation for both the ordinary function and its "lift")

f(X) is, therefore, a random variable. It maps outcomes to real numbers.

So it doesn't even make sense to ask what the "inverse" of f(X) would do to an outcome.

Now, the ordinary absolute value function f does map outcomes to outcomes, and it makes sense to ask about its "inverse", or more rigorously, to ask about the inverse image of a set of outcomes.

By definition:

$$f^{-1}(E) = \{ x \in S \, | \, f(x) \in E \}$$

so, if S is the set {1, 2, 3}, then f(x) = x for every x in S, and f^(-1)(E) = E for any E ⊆ S.
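The inverse-image definition above can be computed directly as a sketch (the helper name `preimage` is mine, not standard notation):

```python
def preimage(f, S, E):
    """Inverse image of the set E under f, restricted to the domain S."""
    return {x for x in S if f(x) in E}

# On S = {1, 2, 3}, abs is the identity, so the preimage of any E is E itself.
print(preimage(abs, {1, 2, 3}, {1, 3}))  # {1, 3}

# On a domain with negative outcomes, preimages need not be singletons.
print(preimage(abs, {-2, -1, 1, 2}, {1}))  # {-1, 1}
```

The second call shows why X cannot in general be reconstructed from |X|: two different outcomes, -1 and 1, land in the same preimage.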

 To find the distribution function of Y = |X|, can we do P(|X| <= y)? But then we find P(X < -y or X > y)?

 "But then we find P(X < -y or X > y)?" That is the complement of what you want: P(|X| <= y) = P(-y <= X <= y) = 1 - P(X < -y or X > y).
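For a discrete X, the identity P(|X| <= y) = P(-y <= X <= y) can be checked directly; the pmf below is made up for illustration:

```python
# A made-up discrete distribution for X (value -> probability).
pmf = {-2: 0.1, -1: 0.2, 0: 0.3, 1: 0.25, 2: 0.15}

def P(event):
    """Probability that X takes a value satisfying the predicate `event`."""
    return sum(p for x, p in pmf.items() if event(x))

y = 1
lhs = P(lambda x: abs(x) <= y)   # P(|X| <= y)
rhs = P(lambda x: -y <= x <= y)  # P(-y <= X <= y)
print(lhs == rhs)  # True: the two events are the same set of values
```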

 1. If X is a random variable, then |X| is also a random variable too.
Not necessarily true. Suppose X=1 with probability p and X=-1 with probability 1-p. Then |X|=1 with probability 1, which is not really random.

 A "random" variable need not be random. I roll a 1-sided die and let X be the result. $P(X=1) = 1.$ Not very random. The only thing that is required is that X map S to $\mathbb{R}$. The term "random variable" was badly chosen since it is neither random nor a variable. --Elucidus

 Quote by Elucidus A "random" variable need not be random. I roll a 1-sided die and let X be the result. $P(X=1) = 1.$ Not very random. The only thing that is required is that X map S to $\mathbb{R}$. The term "random variable" was badly chosen since it is neither random nor a variable. --Elucidus
The proper use of a probability is under conditions of uncertainty. A random variable assigns a probability over the closed interval [0,1] to an unknown outcome as an abstract measure based on a given set of assumptions regarding the possibility of that outcome. Once an outcome is known, probability is always measure 1 or 0.

However, strictly speaking, a probability is not a scalar measure of uncertainty. This can be measured by U = 4(p)(1-p) where uncertainty is maximal at p=0.5. (The multiplier '4' simply scales the measure to the interval [0,1].)
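A quick sketch of the uncertainty measure quoted above, U = 4p(1 - p), showing it vanishes at p = 0 and p = 1 and peaks at p = 0.5:

```python
def U(p):
    """Scaled uncertainty 4*p*(1-p): 0 at p = 0 or p = 1, maximal (1) at p = 0.5."""
    return 4 * p * (1 - p)

for p in (0.0, 0.25, 0.5, 1.0):
    print(p, U(p))  # U(0.5) == 1.0; U(0.0) == U(1.0) == 0.0
```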

People have been arguing about the 'reality' of probabilities for over 200 years, but we're using them more than ever because we have so much uncertainty. Any process for which there is any uncertainty as to the outcome is random.

 To clarify my earlier comment about X being neither random nor a variable. X is a function from S to $\mathbb{R}$. Its value for any given outcome is prescribed based on the function's rule. These values are not randomly assigned. Any randomness is inherent in the underlying experiment, not in X. The name was coined long ago by people who lacked as much clarity as in modern times. The weight of tradition often forestalls changing terminology. It is true that "random variables" are used to analyse experiments that have uncertainty about them, but the experiments do not necessarily have to be uncertain. If I toss a two-headed coin, the probability of Tails is 0, while Heads is 1 (no randomness involved). --Elucidus

 Quote by Elucidus To clarify my earlier comment about X being neither random nor a variable. X is a function from S to $\mathbb{R}$. It's true that "random variables" are used to analyse experiments that have uncertainty about them, but the experiments do not necessarily have to be uncertain. If I toss a two-headed coin, the probability of Tails is 0, while Heads is 1 (no randomness involved).
Do the heads on each side of the coin face in the same or opposite directions?