The following interesting result popped up in an old probability textbook (without proof or citations) and I'm curious to know how it can be derived.

It seems to hold for any square matrix (not just Markov transition matrices), and as far as I can tell it's unrelated to Perron's formula from number theory.

The term [tex]A_{ji}(\lambda)/|\lambda I_m-P|[/tex] hints that the resolvent [tex](\lambda I_m-P)^{-1}[/tex] is involved, so I suspect it's done by taking the Laplace transform of [tex]e^{tP}[/tex] and somehow extracting the [tex]n[/tex]th term of the Taylor series. Is this on the right track, and if so, how would it be done? In particular, how do they turn the expression into a sum of derivatives at the eigenvalues?
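For what it's worth, the simple-eigenvalue case of the formula is easy to check numerically. Here is a sketch in plain numpy (the adjugate helper and the 2x2 test matrix are my own, just for illustration) comparing [tex]P^n[/tex] against the residue sum [tex]\sum_k \lambda_k^n\,\operatorname{adj}(\lambda_k I-P)/\Delta'(\lambda_k)[/tex], where [tex]\Delta(\lambda)=|\lambda I-P|[/tex]:

```python
import numpy as np

def adjugate(A):
    """Classical adjugate: adj(A)[i, j] = (-1)^(i+j) * det(minor of A with row j, col i removed)."""
    n = A.shape[0]
    adj = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, j, axis=0), i, axis=1)
            adj[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return adj

# A stochastic matrix with distinct eigenvalues (1 and 0.3).
P = np.array([[0.5, 0.5],
              [0.2, 0.8]])
n = 5
m = P.shape[0]

# Characteristic polynomial Delta(lam) = det(lam*I - P) and its derivative.
coeffs = np.poly(P)
dcoeffs = np.polyder(coeffs)

# Sum of residues of lam^n * (lam*I - P)^{-1} over the (simple) eigenvalues:
#   P^n = sum_k lam_k^n * adj(lam_k*I - P) / Delta'(lam_k)
eigvals = np.linalg.eigvals(P)
Pn = sum(lam ** n * adjugate(lam * np.eye(m) - P) / np.polyval(dcoeffs, lam)
         for lam in eigvals)

print(np.allclose(Pn.real, np.linalg.matrix_power(P, n)))  # prints True
```

Note that the matrix can't be inverted at [tex]\lambda=\lambda_k[/tex], which is why the adjugate is computed directly from cofactors rather than as [tex]\det(A)A^{-1}[/tex].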

Thanks, me neither. I haven't checked the details, but I think it can be done by finding the residue of [tex]\lambda^n(\lambda I-P)^{-1}[/tex] at [tex]\lambda=\infty[/tex] via a Laurent series and a partial-fraction expansion.
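A sketch of that residue argument (I'm filling in the details myself here, so treat it as an outline): for [tex]|\lambda|[/tex] greater than the spectral radius of [tex]P[/tex], the resolvent has the Neumann (Laurent) series

[tex](\lambda I-P)^{-1}=\sum_{k=0}^{\infty}\frac{P^k}{\lambda^{k+1}},[/tex]

so the coefficient of [tex]\lambda^{-1}[/tex] in [tex]\lambda^n(\lambda I-P)^{-1}[/tex] is exactly [tex]P^n[/tex], i.e.

[tex]P^n=\frac{1}{2\pi i}\oint_{|\lambda|=R}\lambda^n(\lambda I-P)^{-1}\,d\lambda[/tex]

for [tex]R[/tex] large enough. Writing [tex](\lambda I-P)^{-1}=\operatorname{adj}(\lambda I-P)/|\lambda I-P|[/tex], the integrand is a rational function whose only poles are the eigenvalues, so the integral equals the sum of residues there. A simple eigenvalue [tex]\lambda_k[/tex] contributes [tex]\lambda_k^n\operatorname{adj}(\lambda_k I-P)/\Delta'(\lambda_k)[/tex] with [tex]\Delta(\lambda)=|\lambda I-P|[/tex], while an eigenvalue of multiplicity [tex]m_k[/tex] contributes

[tex]\frac{1}{(m_k-1)!}\lim_{\lambda\to\lambda_k}\frac{d^{m_k-1}}{d\lambda^{m_k-1}}\left[(\lambda-\lambda_k)^{m_k}\,\lambda^n(\lambda I-P)^{-1}\right],[/tex]

which would be where the derivatives at the eigenvalues come from.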

The formula was in Sveshnikov's Problems in Probability (stated as a "basic formula", not as an actual problem). The result might be discussed in Gantmakher's Theory of Matrices or Horn & Johnson's Matrix Analysis, possibly even for more general matrix functions, but I don't have copies at hand to check.