Why Is the Characteristic Function of a Random Variable Complex-Valued?

  • Thread starter: cappadonza
  • Tags: Characteristic
AI Thread Summary
The characteristic function of a random variable is complex-valued and is considered "well-behaved," unlike distribution functions, which may not always have a density. Its definition is given by φ(t) = E(e^{itX}), and it is used primarily to derive the distribution function of the sum of independent random variables, where the characteristic function of the sum equals the product of the individual functions. Unlike moment-generating functions, which may not exist for all distributions, every probability distribution has a characteristic function that uniquely determines it. This makes characteristic functions essential in probability theory. Understanding their derivation and applications can be further explored through various resources, including recommended presentations.
cappadonza
So a characteristic function of a RV is a complex-valued function. From my lecture, the distribution function of a random variable is not always "well behaved" (it may not have a density, etc.). A characteristic function, on the other hand, is "well behaved".
What I don't understand is: is that the only reason we use it?
How is it actually derived, and why does it have to be complex-valued?
This is the definition I'm given: $$\phi(t) = \mathbb{E}(e^{itX})$$
How is this actually derived? Is there somewhere I can find a proof?
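As a concrete instance of the definition, take a Bernoulli variable with P(X = 1) = p. The expectation is just a two-term sum, and the imaginary part comes straight from Euler's formula $e^{itx} = \cos(tx) + i\sin(tx)$:
$$\phi(t) = \mathbb{E}\left(e^{itX}\right) = (1-p)\,e^{it\cdot 0} + p\,e^{it\cdot 1} = (1 - p + p\cos t) + i\,p\sin t.$$
So the function is complex-valued simply because $e^{itX}$ itself is; for a distribution symmetric about 0 the imaginary parts cancel and $\phi(t)$ is real.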
 
You need to clarify your question. Definitions aren't derived.

As far as usage, the simplest example is deriving the distribution function of a sum of independent random variables. The characteristic function of the sum is the product of the characteristic functions of the individual variables.
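As a rough numerical check of that product property (a minimal sketch using NumPy; the exponential/uniform choice of variables is arbitrary), one can compare Monte Carlo estimates of the two sides:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two independent random variables; the particular distributions are arbitrary.
x = rng.exponential(scale=1.0, size=n)
y = rng.uniform(-1.0, 1.0, size=n)

def cf(sample, t):
    """Monte Carlo estimate of the characteristic function E[exp(i*t*X)]."""
    return np.mean(np.exp(1j * t * sample))

for t in (0.5, 1.0, 2.0):
    lhs = cf(x + y, t)          # characteristic function of the sum
    rhs = cf(x, t) * cf(y, t)   # product of the individual characteristic functions
    print(t, lhs, rhs)          # the two estimates agree up to sampling error
```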
 
If you are studying characteristic functions, you should have already seen moment-generating functions.

Moment-generating functions can be used to uniquely identify the form of a distribution IF (big if) the moments of the distribution satisfy a very strict requirement. That doesn't happen all the time.

Even worse, not every probability distribution has a moment-generating function: think of a t-distribution with 5 degrees of freedom. It has no moments of order 5 or greater, so no moment-generating function.

However, EVERY distribution has a characteristic function, and every distribution is uniquely determined by the form of that function. That is one (not the only) reason for their importance.
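A small numerical illustration of that contrast (a sketch assuming only NumPy): for samples from a t-distribution with 5 degrees of freedom, a Monte Carlo estimate of E[e^{tX}] never settles down because the expectation is infinite, while the estimate of E[e^{itX}] is always bounded in modulus by 1, since |e^{itX}| = 1.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.standard_t(df=5, size=1_000_000)
t = 1.0

# MGF estimate E[exp(t*X)]: dominated by the few largest draws, it does not
# settle as n grows because the true expectation is infinite.
for n in (10_000, 100_000, 1_000_000):
    print("MGF estimate, n =", n, ":", np.exp(t * sample[:n]).mean())

# Characteristic-function estimate E[exp(i*t*X)]: |exp(i*t*X)| = 1, so the
# estimate is bounded in modulus by 1 and stabilises as n grows.
for n in (10_000, 100_000, 1_000_000):
    print("CF estimate, n =", n, ":", np.exp(1j * t * sample[:n]).mean())
```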
 
I found this presentation to be helpful in understanding the derivation and applications of the characteristic function.

http://www.sjsu.edu/faculty/watkins/charact.htm
 