
#1
Oct 13, 2010, 08:12 AM

P: 523

The following interesting result popped up in an old probability textbook (without proof or citations) and I'm curious to know how it can be derived.
The term [tex]A_{ji}(\lambda)/|\lambda I_m - P|[/tex] would hint that [tex](\lambda I_m - P)^{-1}[/tex] is involved, so I suspect it's done by finding the Laplace transform of [tex]e^{tP}[/tex] and somehow extracting the nth term of the Taylor series. Is this on the right track, and if so, how would it be done? In particular, how do they turn the expression into a sum of derivatives at the eigenvalues? 
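For what it's worth, in the diagonalizable case the "sum over eigenvalues" structure is easy to check numerically: the residue of [tex]\lambda^n (\lambda I_m - P)^{-1}[/tex] at a simple eigenvalue [tex]\lambda_k[/tex] is just [tex]\lambda_k^n[/tex] times the spectral projector (the derivatives only show up for repeated eigenvalues). A minimal sketch — the 2x2 transition matrix here is made up purely for illustration:

```python
import numpy as np

# Hypothetical 2x2 transition matrix (not from any particular problem)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
n = 5

# Spectral route: for diagonalizable P, P^n = sum_k lam_k^n * (v_k w_k^T),
# where v_k are the right eigenvectors (columns of V) and w_k^T the rows
# of V^{-1}. Each outer product v_k w_k^T is the projector onto the
# eigenspace of lam_k, i.e. the residue of the resolvent there.
lam, V = np.linalg.eig(P)
W = np.linalg.inv(V)
Pn_spectral = sum(lam[k]**n * np.outer(V[:, k], W[k, :]) for k in range(len(lam)))

# Compare against the direct computation
Pn_direct = np.linalg.matrix_power(P, n)
assert np.allclose(Pn_spectral, Pn_direct)
```

The determinant/cofactor form in the book's formula is the same object written entrywise: [tex](\lambda I_m - P)^{-1} = \operatorname{adj}(\lambda I_m - P)/|\lambda I_m - P|[/tex], which is where the [tex]A_{ji}(\lambda)[/tex] cofactors come from.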



#2
Oct 14, 2010, 09:42 PM

P: 313

Have you managed to derive the formula yet? If I had time I'd love to have a go...
Anyway, I did a little searching and found http://crypto.mat.sbg.ac.at/~ste/diss/node12.html. There he cites p. 16 of V. Romanovsky, Discrete Markov Chains, but I couldn't get a copy of it. 



#3
Oct 15, 2010, 05:23 PM

P: 523

The formula was in Sveshnikov's Problems in Probability (as a "basic formula", not an actual problem). The result might be discussed in Gantmakher's Theory of Matrices or Horn & Johnson's Matrix Analysis, possibly even for more general matrix functions, though I don't have copies of these to check. 

