# Wiener Process Properties

1. Aug 2, 2012

### Polymath89

I have a simple question about the intuition behind property 1 of a Wiener Process. It says in my textbook that the change in a variable z that follows a Wiener Process is:

$$δz=ε\sqrt{δt}$$

where ε is a random drawing from a standard normal distribution, $$\Phi(0,1)$$ (mean 0, standard deviation 1).

Now I think $$\sqrt{δt}$$ is supposed to be the standard deviation of the change over an interval of length δt, for a variable whose standard deviation over one year is 1.

My question now is: if $$\sqrt{δt}$$ is the standard deviation of a normally distributed random variable, why is the random drawing from another normal distribution necessary? In other words, why do I have to multiply ε by $$\sqrt{δt}$$?
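For reference, the role of the $$\sqrt{δt}$$ factor can be checked directly from the rules for variances (a short derivation of my own, not from the textbook). Since ε has variance 1,

$$\operatorname{Var}(δz) = \operatorname{Var}(ε\sqrt{δt}) = δt\,\operatorname{Var}(ε) = δt,$$

so over n independent steps covering a total time $$T = n\,δt$$ the variances add:

$$\operatorname{Var}\big(z(T) - z(0)\big) = n\,δt = T.$$

The scaling by $$\sqrt{δt}$$ is exactly what makes the variance of the total change proportional to elapsed time, independent of how finely the interval is subdivided.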

2. Aug 2, 2012

### steviekm3

If you don't multiply by $$\sqrt{δt}$$, your δz will be too big. Suppose δt is really small; then δz certainly does not have an N(0,1) distribution.

3. Aug 2, 2012

### Polymath89

You're right that δz would be too big if the time change were small, but that's not a really satisfying answer for me. I want to know why δz is normally distributed with that particular standard deviation, not just why it wouldn't be normally distributed in a different form.

4. Aug 2, 2012

### steviekm3

Why what is normally distributed?

5. Aug 2, 2012

### steviekm3

If you take a Wiener process and break it up into any number of intervals, e.g. 2, 3, 1000, 1000000, etc., then within each interval you have in effect another Wiener process. In this sense it is like a fractal: no matter how small the interval, you always have another Wiener process. So one thing you can ask yourself is how you would simulate a Wiener process on a computer.
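That closing question can be made concrete. Below is a minimal sketch (the function name `wiener_path` and its parameters are my own choices, not from the thread) that simulates a standard Wiener process exactly as the textbook formula prescribes: at each step, draw ε from N(0,1) and add ε·√δt. Simulating the same horizon at a coarse and a fine resolution illustrates the "fractal" point above: both give an endpoint with the same distribution.

```python
import random

def wiener_path(total_time, n_steps, seed=None):
    """Simulate a standard Wiener process on [0, total_time]
    using n_steps equal increments dz = eps * sqrt(dt)."""
    rng = random.Random(seed)
    dt = total_time / n_steps
    z = 0.0
    path = [z]
    for _ in range(n_steps):
        eps = rng.gauss(0.0, 1.0)   # random drawing from N(0, 1)
        z += eps * dt ** 0.5        # increment scaled by sqrt(dt)
        path.append(z)
    return path

# The same one-year horizon simulated at two resolutions; in both
# cases the endpoint z(1) has mean 0 and standard deviation 1.
coarse = wiener_path(1.0, 10, seed=42)
fine = wiener_path(1.0, 10_000, seed=42)
```

Note that without the `dt ** 0.5` factor, the variance of the endpoint would grow with the number of steps instead of with elapsed time, which is exactly the problem steviekm3 pointed out in post 2.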

Last edited: Aug 2, 2012
6. Aug 5, 2012

### Stephen Tashi

One way to look at it is that it's definition of a Wiener process!

Accepting that, the question becomes: why does this nesting of normal random variables work to define a stochastic process at all? A key property of normal random variables is that the sum of independent normal random variables is another normal random variable.

Suppose you are analyzing a phenomenon (or simulating it with a computer program) and you observe the process at discrete time intervals, say t = 10, t = 20, t = 30, etc. You find that the increments of the process are independent and normally distributed. Then you daydream about refining your measurements so they are taken at times t = 1, 2, .... Wouldn't it be nice if the increments at these small time intervals were also normally distributed? Is that even mathematically possible? Yes! The increments at smaller times could be independent normal random variables that add up to the larger increments.

(If you had found that the increments at larger times were, for example, uniform random variables, you couldn't claim that they came from adding smaller uniform random variables. Independent uniform random variables don't sum to a uniform random variable. In fact, a sum of a large number of independent uniform random variables is approximately normally distributed.)

From this point of view the intuitive idea of a Wiener process is that it is a phenomenon whose increments can be analyzed as independent normal random variables at increasingly small time intervals. Of course, if the standard deviation of an increment measured over intervals of length 10 (t = 10, 20, 30, ...) is $\sigma_{10}$, you can't say the standard deviation over intervals of length 1 is one tenth of $\sigma_{10}$. To have ten small increments produce the correct spread over one large increment, you must consider how the variances (not the standard deviations) of a sum of independent random variables add. That's essentially where the square root comes in: variances add linearly in time, so standard deviations grow like $\sqrt{t}$.
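That square-root scaling can be verified directly. The sketch below (function name `increment_std` is my own) builds an increment of length 10 out of ten sub-increments of length 1 and estimates its standard deviation: the result is close to $\sqrt{10} \approx 3.16$ times the unit standard deviation, not 10 times it.

```python
import random

def increment_std(dt, n_sub, n_samples=4000, seed=1):
    """Estimate the standard deviation of a Wiener increment over
    time dt, built by adding n_sub sub-increments of length dt/n_sub."""
    rng = random.Random(seed)
    sub_dt = dt / n_sub
    totals = []
    for _ in range(n_samples):
        totals.append(sum(rng.gauss(0.0, 1.0) * sub_dt ** 0.5
                          for _ in range(n_sub)))
    m2 = sum(t * t for t in totals) / n_samples  # second moment (mean is 0)
    return m2 ** 0.5

# Ten unit-length sub-increments reproduce the spread of one
# length-10 increment: the std is sqrt(10), not 10 times sigma_1.
```

Because the variances 1 + 1 + ... + 1 add to 10, the standard deviation is $\sqrt{10}$; that is the same bookkeeping that forces the $\sqrt{δt}$ factor in $δz = ε\sqrt{δt}$.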