What is Heaviside Function? Tutorial & Application

Summary
The Heaviside function, when multiplied by a random variable, does not qualify as a probability density function (pdf) because it does not satisfy the normalization condition required for pdfs. To create a valid pdf using the Heaviside function, one must normalize it by calculating the total area under the curve and dividing by that value. This process is particularly relevant in Bayesian statistics, where a prior and likelihood are combined to form a posterior. If the prior is not normalized, the posterior must be adjusted accordingly to ensure proper probability representation. Understanding these concepts is essential for applying the Heaviside function in statistical contexts.
zli034:
Hi all:

If the Heaviside function multiplies a random variable, is that a probability density function?

This is my first time hearing about the Heaviside function; are there any tutorials or applications of it?
 
The Heaviside function multiplied by a real-valued random variable is not a probability density function. For a probability density function P of a random variable which may only take real values,

\int_{-\infty}^{\infty} P(x)\,dx = 1

whereas if H(x) is the Heaviside step function, and a is a real number

\int_{-\infty}^{\infty} aH(x)\,dx \neq 1
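In fact the integral of aH(x) over the real line diverges for any nonzero a, so no choice of constant can normalize it. A minimal numeric sketch of this (function names are mine, not from the thread): the area under a·H(x) on [-T, T] grows linearly with T.

```python
# Sketch: no constant a can make a*H(x) integrate to 1,
# because the area grows without bound as the range widens.

def heaviside(x):
    """Heaviside step function: 0 for x < 0, 1 for x >= 0."""
    return 1.0 if x >= 0 else 0.0

def area(f, lo, hi, n=100_000):
    """Midpoint Riemann sum of f over [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

a = 0.5
for T in (10, 100, 1000):
    # Area under a*H(x) on [-T, T] is a*T: it keeps growing with T,
    # so it can never settle at 1.
    print(T, a * area(heaviside, -T, T))
```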
 
zli034 said:
Hi all:

If the Heaviside function multiplies a random variable, is that a probability density function?

This is my first time hearing about the Heaviside function; are there any tutorials or applications of it?

You need to normalize your pdf.

This kind of thing happens in Bayesian statistics: you have a prior and a likelihood, and you create the posterior from the two of them.

One thing you should realize, though, is that if you want to use the Heaviside function as some kind of prior, you will need to normalize the posterior. If you have a likelihood that is normalized and a prior that is not (which will nearly always be the case here), then you need to find the total area under the un-normalized posterior and divide the posterior by that area.

In terms of interpreting what you are doing, it is basically the equivalent of defining a uniform prior in some (possibly collections of) interval(s).

So: find the total area of your new pdf (integrate it over the whole real line), divide the pdf by that number, and you will have a proper pdf.
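A minimal numeric sketch of that normalization step (the function names and the Gaussian likelihood with x = 1, sigma = 1 are illustrative assumptions, not from the thread): multiply a normalized likelihood by an un-normalized Heaviside prior, compute the total area, and divide.

```python
import math

def likelihood(theta, x=1.0, sigma=1.0):
    """Normal likelihood of one observation x given mean theta (normalized in x)."""
    return math.exp(-0.5 * ((x - theta) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def prior(theta):
    """Heaviside prior: flat on theta >= 0, zero below -- NOT normalized."""
    return 1.0 if theta >= 0 else 0.0

def unnormalized_posterior(theta):
    return likelihood(theta) * prior(theta)

# Total area via a midpoint Riemann sum over a range wide enough
# that the Gaussian tail beyond it is negligible.
lo, hi, n = -10.0, 10.0, 200_000
h = (hi - lo) / n
Z = sum(unnormalized_posterior(lo + (i + 0.5) * h) for i in range(n)) * h

def posterior(theta):
    """Proper pdf: un-normalized posterior divided by its total area."""
    return unnormalized_posterior(theta) / Z

# The normalized posterior integrates to (about) 1 on the same grid.
total = sum(posterior(lo + (i + 0.5) * h) for i in range(n)) * h
print(round(total, 6))
```

As the earlier reply notes, this is equivalent to putting a uniform prior on the interval where the Heaviside function is 1 (here, theta >= 0): the posterior is simply the likelihood truncated to that region and rescaled.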
 
