# Mean and Variance of Random Walk

## Main Question or Discussion Point

I'm reading a stat textbook and it says the following:

Let a discrete-time random walk be defined by $X_t = X_{t-1} + e_t$, where the $e_t$'s are i.i.d. $N(0, \sigma^2)$. Then for $t \ge 1$,

(i) $E(X_t) = 0$
(ii) $\mathrm{Var}(X_t) = t\sigma^2$

However, the textbook doesn't give much justification for these results, and I don't understand why (i) and (ii) are necessarily true here.
For example, $E(X_t) = E(X_{t-1} + e_t) = E(X_{t-1}) + E(e_t)$, but how can you calculate $E(X_{t-1})$?

Can someone please explain in more detail?
Thanks a lot!
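For what it's worth, the theorem can be sanity-checked numerically. Below is a minimal simulation sketch (my own addition; the choice of $\sigma$, $t$ and the number of paths is arbitrary), assuming the walk starts at $X_0 = 0$:

```python
import numpy as np

# Simulate many independent random-walk paths X_t = X_{t-1} + e_t,
# with X_0 = 0 and e_t ~ N(0, sigma^2) i.i.d., then compare the sample
# mean and variance of X_t with the claimed values 0 and t * sigma^2.
rng = np.random.default_rng(0)
sigma = 2.0       # standard deviation of each increment (arbitrary)
t = 50            # time index at which to inspect the walk
n_paths = 100_000

# Each row is one path; the cumulative sum of the increments
# gives X_1, X_2, ..., X_t along that row.
increments = rng.normal(0.0, sigma, size=(n_paths, t))
X = increments.cumsum(axis=1)

mean_Xt = X[:, -1].mean()   # should be close to 0
var_Xt = X[:, -1].var()     # should be close to t * sigma^2 = 200
print(mean_Xt, var_Xt)
```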

chiro
Hey kingwinner.

For those results, just use the linearity results for expectation and variance. You know that one increment of the random walk has a certain distribution, and you also know that the increments are independent of one another.

Expectation should be straightforward, but for the variance you have to use the fact that the increments are independent, which makes the covariance terms zero.

With this information you should be able to derive the results. (If you are wondering about the linear results you need, a statistics book should provide them with a proof, but I can give you hints if you need them).
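Putting those two facts together (linearity of expectation, and zero covariance between independent increments), and assuming the walk starts at $X_0 = 0$ so that $X_t = \sum_{i=1}^{t} e_i$, the derivation is:

$$E(X_t) = E\left(\sum_{i=1}^{t} e_i\right) = \sum_{i=1}^{t} E(e_i) = 0,$$

$$\mathrm{Var}(X_t) = \mathrm{Var}\left(\sum_{i=1}^{t} e_i\right) = \sum_{i=1}^{t} \mathrm{Var}(e_i) = t\sigma^2,$$

where the variance of the sum splits into a sum of variances because all the covariance terms vanish by independence.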

Thanks for the help! However, my problem is not with the basic linearity results. I understand these.

Here is my problem...
$$E(X_t) = E(X_{t-1} + e_t) = E(X_{t-2} + e_{t-1} + e_t) = E(X_{t-3} + e_{t-2} + e_{t-1} + e_t) = \cdots ?$$

But I have no idea where to stop. No matter where I stop, I end up with an $E(X_j)$ for some $j$, and I don't see any way of calculating $E(X_j)$.

Thanks!

chiro
You fix the number of increments to be a constant, and then derive the results using this fact.

From this you will have a finite number of "e" terms and you use that to get your identities.

Just to let you know, there are subjects of study which deal with convergence of sets of "many" random variables, but for this kind of problem, you deal with a finite number of increments to get your results.
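In symbols, the unrolling terminates at the fixed starting value:

$$X_t = X_{t-1} + e_t = X_{t-2} + e_{t-1} + e_t = \cdots = X_0 + \sum_{i=1}^{t} e_i,$$

so once $X_0$ is pinned down (e.g. $X_0 = 0$), there is no leftover $E(X_j)$ to worry about: the walk at time $t$ is just a constant plus a finite sum of $t$ i.i.d. increments.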

Are you familiar with the idea of [mathematical induction](http://en.wikipedia.org/wiki/Mathematical_induction)? You could use this if you want a formal proof. The idea is simple. If you want to prove that some statement is true for all the positive integers, you prove two things:

(1) It's true for 1.
(2) If it's true for n, then it's also true for n+1.

Once you've shown these two things, you've shown it's true for all positive integers.

So, in your case, show that (1) $E(X_1) = 0$, and (2) if $E(X_t) = 0$, then $E(X_{t+1}) = 0$.
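Spelled out, the induction is short (a sketch, assuming the walk is defined to start at $X_0 = 0$):

$$\text{Base case: } X_1 = X_0 + e_1 = e_1 \;\Rightarrow\; E(X_1) = E(e_1) = 0.$$

$$\text{Inductive step: } E(X_{t+1}) = E(X_t + e_{t+1}) = E(X_t) + E(e_{t+1}) = 0 + 0 = 0.$$

The same induction works for the variance, since $X_t$ depends only on $e_1, \ldots, e_t$ and is therefore independent of $e_{t+1}$:

$$\mathrm{Var}(X_{t+1}) = \mathrm{Var}(X_t) + \mathrm{Var}(e_{t+1}) = t\sigma^2 + \sigma^2 = (t+1)\sigma^2.$$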

OK, but the problem is: how can we show that $E(X_1) = 0$? This is where I'm stuck...

$X_1 = e_1$.

hmm...why? How can we prove this?

That's part of the definition of the random walk. Apparently, if the quote you gave is complete, that isn't mentioned right there, but it should be somewhere in the book.

Certainly, if the walk is NOT defined to be 0 at time zero (or at some other time), the inference $E(X_t) = 0$ is not justified. Rather, for $t \ge 1$ you would have $\mathbb{E}[X_t] = X_0$.

EDIT: That wasn't very clear. What I meant is: $\mathbb{E}[X_0] = 0$ is part of the definition of the random walk.

The statement of the theorem itself gives no indication of how $X_0$ is defined.

But in the brief justification below it, the book says: "WLOG, we can ASSUME $X_0 = 0$, and so $X_t = e_1 + e_2 + \cdots + e_t$. The results follow."

So they are assuming $X_0 = 0$, but why is this WLOG? If $X_0 = 100$, or if $X_0 \sim N(\text{mean} = 100, \text{variance} = 1000)$, then the results of the theorem won't be true, right?

And also, how do we know that $X_0 = 0$? Is this always the case?

Thanks!

Well, that is completely new to me. I have never seen a definition of the discrete random walk that didn't define it as starting at zero. I see no justification whatever for assuming it if it is not so defined, and, as you say, the conclusion $E(X_t) = 0$ is wrong if $X_0$ is not 0.

You could justify assuming WLOG that $X_0 = 0$ if you sought to prove that $E[X_t] = X_0$, because you can always define $Y_t = X_t - X_0$. But I would have to say the book's claim, as stated, is just wrong.
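That centring trick is easy to see numerically. A minimal sketch (my own addition; the start value $X_0 = 100$ and the other parameters are arbitrary):

```python
import numpy as np

# A walk started at X_0 = 100 does not have mean 0, but the shifted
# process Y_t = X_t - X_0 does, with Var(Y_t) = t * sigma^2.
rng = np.random.default_rng(1)
sigma = 1.0
t = 30
n_paths = 100_000
x0 = 100.0

X = x0 + rng.normal(0.0, sigma, size=(n_paths, t)).cumsum(axis=1)
Y = X - x0                     # centred walk

mean_Xt = X[:, -1].mean()      # close to x0 = 100, not 0
mean_Yt = Y[:, -1].mean()      # close to 0
var_Yt = Y[:, -1].var()        # close to t * sigma^2 = 30
print(mean_Xt, mean_Yt, var_Yt)
```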
