On injectivity of two-sided Laplace transform

psie
TL;DR Summary
I am studying a theorem in probability which says that the Laplace transform of a (nonnegative) random variable determines the law of that random variable; this is equivalent to injectivity of the transform. The book only treats Laplace transforms of nonnegative random variables, but since the moment generating function (mgf) is in fact a two-sided Laplace transform, and since mgfs exist for random variables of arbitrary sign, I wonder how to extend the result to say that the two-sided Laplace transform is also injective.
I will omit the theorem and its proof here, since it would mean a lot of typing. But the relevant part of the proof is that we consider the set ##H## of functions ##\psi_\lambda(x)=e^{-\lambda x}## for ##x\geq 0## and ##\lambda\geq 0##. We extend these functions by continuity so that they are defined on all of ##[0,\infty]##, namely we put ##\psi_\lambda(\infty)=0## if ##\lambda>0## and ##\psi_0(\infty)=1##. Since ##[0,\infty]## is compact, and the linear span of ##H## is an algebra (products of exponentials are again exponentials) that contains the constants and separates points, the Stone-Weierstrass theorem shows that this span is dense in ##C([0,\infty])##.
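For concreteness, here is how I understand the density argument being used (a sketch in my own notation, not the book's, assuming ##X## and ##Y## are ##[0,\infty)##-valued with laws ##\mu## and ##\nu##): if ##E[e^{-\lambda X}]=E[e^{-\lambda Y}]## for all ##\lambda\geq 0##, then by linearity ##\int f\,d\mu=\int f\,d\nu## for every ##f## in the span of ##H##. Viewing ##\mu,\nu## as measures on ##[0,\infty]## that give no mass to ##\{\infty\}##, density of the span in ##C([0,\infty])## upgrades this to all ##f\in C([0,\infty])##, since for ##g## in the span
$$\left|\int f\,d\mu-\int f\,d\nu\right|\leq\left|\int (f-g)\,d\mu\right|+\left|\int g\,d\mu-\int g\,d\nu\right|+\left|\int (g-f)\,d\nu\right|\leq 2\|f-g\|_\infty.$$
Two finite measures on the compact set ##[0,\infty]## that integrate every continuous function identically must coincide (Riesz representation), so ##\mu=\nu##.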

I have seen another proof of the injectivity of the Laplace transform (not related to probability), but that proof also uses the Stone-Weierstrass theorem. So how would one go about showing that the two-sided Laplace transform is injective? I guess one would like to work on ##[-\infty,\infty]## and apply the Stone-Weierstrass theorem there as well, but I'm unsure how one would modify ##H##.
 
I hope this can help.
 

Attachments

Gavran said:
I hope this can help.
Thank you. This was helpful. In proving the main theorem, Theorem 2.2, they rely on "Curtiss' theorem". I have looked at Curtiss' paper, but I am unsure which theorem Chareka means. Do you know which theorem Chareka is referring to?
 
psie said:
Thank you. This was helpful. In proving the main theorem, Theorem 2.2, they rely on "Curtiss' theorem". I have looked at Curtiss' paper, but I am unsure which theorem Chareka means. Do you know which theorem Chareka is referring to?
If ## \{M_n(t)\} ## is a sequence of moment generating functions corresponding to a sequence of distribution functions ## \{F_n(x)\} ##, then convergence of ## \{M_n(t)\} ## to a moment generating function ## M(t) ## on an interval ## (-\delta,\delta) ## with ## \delta>0 ## implies that ## \{F_n(x)\} ## converges weakly to ## F(x) ##, where ## F(x) ## is the distribution function with moment generating function ## M(t) ##.
 
Gavran said:
If ## \{M_n(t)\} ## is a sequence of moment generating functions corresponding to a sequence of distribution functions ## \{F_n(x)\} ##, then convergence of ## \{M_n(t)\} ## to a moment generating function ## M(t) ## on an interval ## (-\delta,\delta) ## with ## \delta>0 ## implies that ## \{F_n(x)\} ## converges weakly to ## F(x) ##, where ## F(x) ## is the distribution function with moment generating function ## M(t) ##.
Ok, but how does it follow from this that $$L_{u^\ast}(s')=L_{v^\ast}(s')\implies u^\ast=v^\ast,$$ where ##u^\ast,v^\ast## are probability density functions (and ##L_u## is the bilateral Laplace transform of ##u##)? This is what Chareka concludes in Theorem 2.2 by appealing to Curtiss' theorem. This is the part I have a hard time understanding.
 
psie said:
Ok, but how does it follow from this that $$L_{u^\ast}(s')=L_{v^\ast}(s')\implies u^\ast=v^\ast,$$ where ##u^\ast,v^\ast## are probability density functions (and ##L_u## is the bilateral Laplace transform of ##u##)? This is what Chareka concludes in Theorem 2.2 by appealing to Curtiss' theorem. This is the part I have a hard time understanding.
I do not know, because I have not seen it.
Probably they used the identity $$ M(s)=E(e^{sX})=E(e^{-(-s)X})=B(-s), $$ where ## B(s) ## is the bilateral Laplace transform of the law of ##X##.
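If that is the intended reading, then one way to complete the step (a sketch under the assumption that all the transforms involved are finite on some interval ##(-\delta,\delta)## with ##\delta>0## and that ##L_{u^\ast}=L_{v^\ast}## holds there) is the following: the corresponding moment generating functions agree on that interval,
$$M_{u^\ast}(s)=L_{u^\ast}(-s)=L_{v^\ast}(-s)=M_{v^\ast}(s),\qquad s\in(-\delta,\delta).$$
Applying Curtiss' theorem to the constant sequence ##M_n\equiv M_{u^\ast}##, which trivially converges to ##M_{v^\ast}## on ##(-\delta,\delta)##, the distribution function with density ##u^\ast## converges weakly to the one with density ##v^\ast##, so the two distribution functions coincide, and hence ##u^\ast=v^\ast## almost everywhere.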
 