Integral in Gardiner's book on stochastic methods

eoghan
Dear all,
I am having trouble with one proof in the book Handbook of Stochastic Methods by Gardiner. In Section 3.7.3 he writes this integral:
\sum_i\int d\vec x \frac{\partial}{\partial x_i}[-A_ip_1\log(p_1/p_2)]
where p_1 and p_2 are two solutions of the Chapman-Kolmogorov equation and \vec A is a function of \vec x. Then Gardiner says: suppose we take p_2 to be a stationary distribution p_s(\vec x) which is nonzero everywhere except at infinity, where it and its first derivative vanish. The integral can then be integrated to give surface terms which vanish at infinity.
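For context, this is how I understand where the integral comes from (my reading of the section, not a quote): Gardiner studies the functional
H(t)=\int d\vec x\; p_1(\vec x,t)\log[p_1(\vec x,t)/p_2(\vec x,t)]
and differentiates it in time using the differential Chapman-Kolmogorov equation; the drift part -\sum_i\partial_i(A_ip) then contributes exactly the total-divergence term above, and the claim is that this term reduces to a boundary contribution which vanishes at infinity.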
I don't know how to prove this! I used Gauss's theorem to obtain:
\sum_i\int_D d\vec x\,\frac{\partial}{\partial x_i}[-A_i p_1\log(p_1/p_2)] = -\int_{D} d\vec x\,\nabla\cdot[\vec A\, p_1\log(p_1/p_2)] = -\int_{\partial D} dS\;\hat n\cdot[\vec A\, p_1\log(p_1/p_2)]

and this is a surface term, where the surface extends to infinity. Now I should conclude that p_1\log(p_1/p_2) is zero at infinity, but I don't know how to prove that. I mean, I only know that p_2 is zero at infinity, and that alone would make the logarithm \log(p_1/p_2) diverge! Maybe I can say that since p_1 is a solution of the Chapman-Kolmogorov equation, it is itself a probability distribution and so p_1 also vanishes at infinity, but I'm not sure about this.
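Just to make the worry concrete (the tails here are only an assumption I make for illustration, they are not from the book): if both solutions had Gaussian-like tails, say
p_1(\vec x)\sim e^{-a|\vec x|^2},\qquad p_2(\vec x)\sim e^{-b|\vec x|^2},\qquad a,b>0,
then
p_1\log(p_1/p_2)\sim (b-a)\,|\vec x|^2\,e^{-a|\vec x|^2}\to 0 \quad\text{as } |\vec x|\to\infty,
so the surface term would vanish even though \log(p_1/p_2) itself diverges: the exponential decay of p_1 beats the logarithmic growth coming from p_2\to 0 (and the polynomial growth of the surface area doesn't change that). What I can't see is how to argue something like this in general, without assuming a specific tail behaviour.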
 
eoghan said:
Dear all,
I am having trouble with one proof in the book Handbook of Stochastic Methods by Gardiner.

Not that I can answer your question, but which edition of Handbook of Stochastic Methods is involved? There is a "section" 3.7.3 in the second edition, but I don't see the equation you mention.
 
Stephen Tashi said:
Not that I can answer your question, but which edition of Handbook of Stochastic Methods is involved? There is a "section" 3.7.3 in the second edition, but I don't see the equation you mention.
Hi Stephen!
The book is the third edition: Chapter 3 = Markov Processes, Section 3.7 = Stationary and Homogeneous Markov Processes, Subsection 3.7.3 = Approach to a Stationary Process.
 