Why Is the Characteristic Function of a Random Variable Complex-Valued?

  • Thread starter: cappadonza
  • Tags: Characteristic

Summary
The characteristic function of a random variable is complex-valued and is considered "well-behaved," unlike distribution functions, which may not always have a density. Its definition is given by φ(t) = E(e^{itX}), and it is used primarily to derive the distribution function of the sum of independent random variables, where the characteristic function of the sum equals the product of the individual functions. Unlike moment-generating functions, which may not exist for all distributions, every probability distribution has a characteristic function that uniquely determines it. This makes characteristic functions essential in probability theory. Understanding their derivation and applications can be further explored through various resources, including recommended presentations.
cappadonza
So a characteristic function of a random variable is a complex-valued function. From my lecture, the distribution function of a random variable is not always "well behaved": it may not have a density, etc. A characteristic function, on the other hand, is "well behaved".
What I don't understand is: is that the only reason we use it?
How is it actually derived, and why does it have to be complex-valued?
This is the definition I'm given: \phi(t) = \mathbb{E}(e^{itX})
How is this actually derived? Is there somewhere I can find the proof?
 
You need to clarify your question. Definitions aren't derived. As for why the function is complex-valued: by Euler's formula, e^{itX} = \cos(tX) + i\sin(tX), so \phi(t) = \mathbb{E}[\cos(tX)] + i\,\mathbb{E}[\sin(tX)].

As far as usage goes, the simplest example is deriving the distribution function of a sum of independent random variables: the characteristic function of the sum is the product of the characteristic functions of the individual variables, as the sketch below checks numerically.
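
To make that concrete, here is a minimal Monte Carlo sketch (my addition, not from the thread); the particular distributions and sample size are arbitrary illustrative choices:

```python
# Numerical check: for independent X and Y, phi_{X+Y}(t) = phi_X(t) * phi_Y(t).
# Each characteristic function is estimated by the sample mean of e^{it*sample}.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)   # X ~ N(0, 1)
y = rng.exponential(1.0, n)   # Y ~ Exp(1), independent of X

def ecf(samples, t):
    """Empirical characteristic function: sample mean of e^{it*sample}."""
    return np.mean(np.exp(1j * t * samples))

for t in (0.5, 1.0, 2.0):
    lhs = ecf(x + y, t)          # characteristic function of the sum
    rhs = ecf(x, t) * ecf(y, t)  # product of the individual ones
    print(f"t={t}: sum={lhs:.4f}  product={rhs:.4f}")  # agree up to Monte Carlo error
```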
 
If you are studying characteristic functions, you should have already seen moment-generating functions.

Moment-generating functions can be used to uniquely identify the form of a distribution IF (big if) the moments of the distribution satisfy a very strict requirement. That doesn't happen all the time.

Even worse, not every probability distribution has a moment-generating function: think of a t-distribution with 5 degrees of freedom. It has no moments of order 5 or greater, so it has no moment-generating function.

However, EVERY distribution has a characteristic function: since |e^{itX}| = 1, the expectation \mathbb{E}(e^{itX}) always exists. Moreover, every distribution is uniquely determined by the form of that function. That is one (not the only) reason for their importance.
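
As a concrete sketch of that last point (my addition, not from the thread): the standard Cauchy distribution, i.e. a t-distribution with 1 degree of freedom, has no finite moments at all, hence no moment-generating function, yet its characteristic function exists and has the known closed form \phi(t) = e^{-|t|}:

```python
# The Cauchy distribution has no MGF, but its characteristic function always
# exists because |e^{itX}| = 1, so E[e^{itX}] is finite for every t.
import numpy as np

rng = np.random.default_rng(1)
samples = rng.standard_cauchy(500_000)

for t in (0.5, 1.0, 2.0):
    empirical = np.mean(np.exp(1j * t * samples))  # Monte Carlo estimate of E[e^{itX}]
    exact = np.exp(-abs(t))                        # known closed form for the Cauchy
    print(f"t={t}: empirical={empirical.real:.3f}  exact={exact:.3f}")
```

(The imaginary part of the estimate is near zero by the symmetry of the Cauchy distribution, so only the real part is printed.)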
 
I found this presentation to be helpful in understanding the derivation and applications of the characteristic function.

http://www.sjsu.edu/faculty/watkins/charact.htm
 