Why Does the Wiener Process Use Normal Distribution Multiplication for Modeling?

  • Thread starter: Polymath89
  • Tags: Process Properties
Polymath89
I have a simple question about the intuition behind property 1 of a Wiener process. My textbook says that the change in a variable z that follows a Wiener process is

δz = ε\sqrt{δt}

where ε is a random drawing from a standard normal distribution \Phi(0,1).

Now I think \sqrt{δt} is supposed to be the standard deviation of the change over an interval of length δt, for a process whose standard deviation over one year (δt = 1) is 1.

My question is: if \sqrt{δt} is the standard deviation of a normally distributed random variable, why is the random drawing from another normal distribution necessary? In other words, why do I have to multiply ε by \sqrt{δt}?
 
Polymath89 said:
I have a simple question about the intuition behind property 1 of a Wiener process. My textbook says that the change in a variable z that follows a Wiener process is

δz = ε\sqrt{δt}

where ε is a random drawing from a standard normal distribution \Phi(0,1).

Now I think \sqrt{δt} is supposed to be the standard deviation of the change over an interval of length δt, for a process whose standard deviation over one year (δt = 1) is 1.

My question is: if \sqrt{δt} is the standard deviation of a normally distributed random variable, why is the random drawing from another normal distribution necessary? In other words, why do I have to multiply ε by \sqrt{δt}?

If you don't multiply by \sqrt{δt}, your δz will be too big. Suppose δt is really small; then δz certainly does not have the distribution N(0,1).
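To see this numerically, here is a minimal sketch using NumPy (the step count, path count, and seed are arbitrary illustrative choices, not anything from the thread). It sums increments over [0, 1] with and without the \sqrt{δt} scaling: with it, the total change has variance 1, as a Wiener process requires; without it, the variance blows up with the number of steps.

```python
import numpy as np

rng = np.random.default_rng(0)

n_steps = 1_000          # number of increments covering t in [0, 1]
dt = 1.0 / n_steps
n_paths = 100_000        # Monte Carlo sample size (illustrative choice)

eps = rng.standard_normal((n_paths, n_steps))  # draws from N(0, 1)

# Correct scaling: each increment is eps * sqrt(dt), so Var(dz) = dt
# and the increments over [0, 1] sum to a variable with variance 1.
z_scaled = (eps * np.sqrt(dt)).sum(axis=1)

# No scaling: each increment is a raw N(0, 1) draw, so the sum of
# n_steps increments has variance n_steps -- far too big.
z_unscaled = eps.sum(axis=1)

print(z_scaled.var())    # close to 1.0
print(z_unscaled.var())  # close to n_steps (1000)
```

Note that the unscaled variance depends on how finely you chop up the interval, which is exactly the inconsistency the \sqrt{δt} factor removes.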
 
Thanks for your answer.

You're right that δz would be too big if the time change is small, but that's not a really satisfying answer for me. I want to know why it's normally distributed, not why it fails to be normally distributed if you use a different form.
 
Polymath89 said:
Thanks for your answer.

You're right that δz would be too big if the time change is small, but that's not a really satisfying answer for me. I want to know why it's normally distributed, not why it fails to be normally distributed if you use a different form.

Why what is normally distributed?
 
Polymath89 said:
I have a simple question about the intuition behind property 1 of a Wiener process. My textbook says that the change in a variable z that follows a Wiener process is

δz = ε\sqrt{δt}

where ε is a random drawing from a standard normal distribution \Phi(0,1).

Now I think \sqrt{δt} is supposed to be the standard deviation of the change over an interval of length δt, for a process whose standard deviation over one year (δt = 1) is 1.

My question is: if \sqrt{δt} is the standard deviation of a normally distributed random variable, why is the random drawing from another normal distribution necessary? In other words, why do I have to multiply ε by \sqrt{δt}?

If you take a Wiener process and break it up into any number of intervals, e.g. 2, 3, 1000, 1000000, etc., then within each interval you have, in effect, another Wiener process. In this sense it is like a fractal: no matter how small the interval, you always have another Wiener process. So one thing you can ask yourself is how you would simulate a Wiener process on a computer.
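Taking up that suggestion, here is one way such a simulation might look (a sketch, not the poster's code; the helper name `wiener_path` and all parameters are my own choices). Whatever step size you pick, the value at a fixed time t has the same distribution, N(0, t), which is the self-similarity the post describes:

```python
import numpy as np

rng = np.random.default_rng(42)

def wiener_path(t_end, n_steps, rng):
    """Simulate one Wiener process path on [0, t_end] by cumulating
    increments dz = eps * sqrt(dt), with eps drawn from N(0, 1)."""
    dt = t_end / n_steps
    dz = rng.standard_normal(n_steps) * np.sqrt(dt)
    return np.concatenate([[0.0], np.cumsum(dz)])

# Coarse and fine discretizations of the same process on [0, 1]:
# the endpoint W(1) is (approximately) N(0, 1) in both cases.
coarse = np.array([wiener_path(1.0, 10, rng)[-1] for _ in range(50_000)])
fine = np.array([wiener_path(1.0, 1000, rng)[-1] for _ in range(5_000)])
print(coarse.var(), fine.var())  # both close to 1.0
```

The point of the \sqrt{δt} factor is precisely that it makes the simulation insensitive to how finely you subdivide time: refining the grid just nests smaller Wiener processes inside larger ones.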
 
Polymath89 said:
Thanks for your answer.

You're right that δz would be too big if the time change is small, but that's not a really satisfying answer for me. I want to know why it's normally distributed, not why it fails to be normally distributed if you use a different form.

One way to look at it is that it's the definition of a Wiener process!

Accepting that, the question becomes why this nesting of normal random variables works to define a sensible stochastic process. A key property of normal random variables is that the sum of independent normal random variables is another normal random variable. Suppose you are analyzing a phenomenon (or simulating it with a computer program) and you observe the process at discrete time intervals, say t = 10, t = 20, t = 30, etc. You find that the increments of the process are independent and normally distributed. Then you daydream about refining your measurements so they are taken at times t = 1, 2, ... Wouldn't it be nice if the increments at these small time intervals were also normally distributed? Is that even mathematically possible? Yes! The increments at smaller times could be independent normal random variables that add up to the larger increments. (If you had found that the increments at larger times were, for example, uniform random variables, you couldn't claim that they came from adding smaller uniform random variables. Independent uniform random variables don't sum to a uniform random variable; in fact, a large number of independent uniform random variables sums to something that is approximately normally distributed.)

From this point of view, the intuitive idea of a Wiener process is that it is a phenomenon whose increments can be analyzed as independent normal random variables at increasingly small time intervals. Of course, if the standard deviation of the increments over intervals of length 10 is \sigma_{10}, you can't say the standard deviation over intervals of length 1 is one tenth of \sigma_{10}. To have 10 small increments produce the correct variance over one large increment, you must use the fact that the variances (not the standard deviations) of independent random variables add: each small increment must have variance \sigma_{10}^2/10, hence standard deviation \sigma_{10}/\sqrt{10}. That's essentially where the square root comes in.
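Both claims in the answer above are easy to check numerically. The following sketch (sample sizes and seed are arbitrary choices of mine) shows that the standard deviation of a sum of ten standard normals is \sqrt{10}, not 10, and that a sum of ten uniforms is bell-shaped rather than uniform:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Ten independent N(0, 1) increments: their sum is exactly N(0, 10),
# so the standard deviation of the sum is sqrt(10), not 10 * 1.
normal_sum = rng.standard_normal((n, 10)).sum(axis=1)
print(normal_sum.std())  # close to sqrt(10) ~ 3.16

# Ten independent Uniform(-1, 1) draws: their sum is NOT uniform on
# (-10, 10) -- it is bell-shaped (central limit theorem), so values
# near 0 are far more common than values near the extremes.
uniform_sum = rng.uniform(-1, 1, (n, 10)).sum(axis=1)
near_zero = np.mean(np.abs(uniform_sum) < 1)   # would be 0.1 if uniform
near_edge = np.mean(np.abs(uniform_sum) > 5)   # would be 0.5 if uniform
print(near_zero, near_edge)
```

This is why the normal family is special here: it is closed under summing independent increments, so the same distributional form survives at every time scale.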
 