How to convert a univariate distribution to a bivariate distribution

Summary
To convert a Marshall-Olkin exponential Weibull distribution from univariate to bivariate, the joint PDF of two independent random variables can be obtained by multiplying their individual PDFs. The discussion emphasizes the need to define the relationship between the two variables, suggesting that one might consider a bivariate normal distribution for dependent variables. Key parameters such as lambda, beta, and alpha are identified as essential for defining the distribution, with alpha being noted as the location or shift parameter. The notation I(0, ∞) indicates the support of the distribution, which is relevant for defining the valid range of the random variable. Understanding these concepts is crucial for successfully creating the bivariate CDF from the given univariate distribution.
samrah
Hi,
I have a Marshall-Olkin exponential Weibull distribution with the CDF and PDF given below. How can I convert it to a bivariate distribution?
Thanks
 
The joint PDF of two independent random variables would just be the product of the two individual PDFs.
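In symbols, for independent random variables X and Y, both the joint CDF and the joint PDF factor into products of the marginals:

\begin{align*}
F_{X,Y}(x,y) &= F_X(x)\,F_Y(y)\,, & f_{X,Y}(x,y) &= f_X(x)\,f_Y(y)\,.
\end{align*}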
 
FactChecker said:
The joint PDF of two independent random variables would just be the product of the two individual PDFs.
Thanks. I have the CDF and PDF (the LaTeX code is given below). How can I make a bivariate CDF from this univariate distribution? What is the shift parameter in this case? Kindly guide me.

Thanks a lot.

\begin{align} \label{A5}
F(x) &= \frac{1- e^{-\left(\lambda\, x+\beta\, x^k\right)}}{1-(1-\alpha)\,
e^{-\left(\lambda\, x+\beta\, x^k\right)}}\cdot \boldsymbol I_{(0, \infty)}(x)\,,\\ \label{A6}
f(x) &= \frac{\alpha\,\left(\lambda+ \beta \,k\,x^{k-1}\right)\, e^{-\lambda\,x-\beta\,x^k}}
{\left(1-(1-\alpha )\, e^{-\left(\lambda\, x+\beta\, x^k\right)}\right)^2} \cdot \boldsymbol I_{(0, \infty)}(x)\,,
\qquad \lambda, \beta, k, \alpha > 0 \,;
\end{align}
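For concreteness, here is a minimal numerical sketch of this CDF and PDF, assuming the names lam, beta, k, alpha stand in for λ, β, k, α (the helper names are illustrative, not from the thread):

```python
# Sketch of the posted Marshall-Olkin exponential Weibull CDF and PDF.
# Parameters: lam, beta, k, alpha (all assumed > 0); intended for x > 0.
import numpy as np

def mo_exp_weibull_cdf(x, lam, beta, k, alpha):
    """F(x) = (1 - e^{-(lam*x + beta*x^k)}) / (1 - (1 - alpha) e^{-(lam*x + beta*x^k)}) for x > 0."""
    x = np.asarray(x, dtype=float)
    s = np.exp(-(lam * x + beta * x**k))            # e^{-(lambda x + beta x^k)}
    cdf = (1.0 - s) / (1.0 - (1.0 - alpha) * s)
    return np.where(x > 0, cdf, 0.0)                # indicator I_(0, inf)(x): zero outside the support

def mo_exp_weibull_pdf(x, lam, beta, k, alpha):
    """f(x) = alpha (lam + beta*k*x^{k-1}) e^{-(lam*x + beta*x^k)} / (1 - (1 - alpha) e^{-(lam*x + beta*x^k)})^2 for x > 0."""
    x = np.asarray(x, dtype=float)
    s = np.exp(-(lam * x + beta * x**k))
    pdf = alpha * (lam + beta * k * x**(k - 1)) * s / (1.0 - (1.0 - alpha) * s)**2
    return np.where(x > 0, pdf, 0.0)
```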
 
You need to decide how you want the two variables to be related. It's easy to determine the joint PDF if they are independent -- just multiply the marginal PDFs. Alternatively, you may want to mimic the bivariate normal, where the X and Y variables are not independent but the vector distance from a center point (the mean) follows a distribution in the same class as the original. For that, you may want to look at how the two coordinates are related in a bivariate normal (see http://mathworld.wolfram.com/BivariateNormalDistribution.html ).

PS. My description of the bivariate normal in terms of a vector "distance" is a loose description, not to be taken literally.
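For the independent case, a minimal sketch reusing the hypothetical mo_exp_weibull_cdf / mo_exp_weibull_pdf helpers sketched above (here both coordinates share the same parameters, though they could differ):

```python
# Under independence, the joint CDF and PDF are just products of the marginals.
def joint_cdf_indep(x, y, lam, beta, k, alpha):
    return (mo_exp_weibull_cdf(x, lam, beta, k, alpha)
            * mo_exp_weibull_cdf(y, lam, beta, k, alpha))

def joint_pdf_indep(x, y, lam, beta, k, alpha):
    return (mo_exp_weibull_pdf(x, lam, beta, k, alpha)
            * mo_exp_weibull_pdf(y, lam, beta, k, alpha))
```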
 
samrah said:
Thanks. I have the CDF and PDF (the LaTeX code is given below). How can I make a bivariate CDF from this univariate distribution? What is the shift parameter in this case? Kindly guide me.

What are lambda, beta, and alpha? What is I(0, ∞)? Is k the other variable along with x? Is this an advanced question?
 
Josh S Thompson said:
What are lambda, beta, and alpha? What is I(0, ∞)? Is k the other variable along with x? Is this an advanced question?
Alpha, beta, lambda, and k are just parameters. I have to fix three of them and vary one (the location or shift parameter). I found that alpha is the location parameter. Now I have to write these in product form with this same PDF: beta, lambda, and k will be fixed, and alpha will vary. I don't know about I(0, ∞).
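If the product form described here just means independent marginals from the same family, with beta, lambda, and k held fixed and a separate alpha for each coordinate, one possible sketch (alpha1 and alpha2 are hypothetical names, reusing the helper sketched above) is:

```python
# Hypothetical product-form bivariate PDF: beta, lambda, k shared by both
# coordinates, each coordinate with its own alpha (alpha1, alpha2).
def bivariate_pdf_product(x, y, lam, beta, k, alpha1, alpha2):
    return (mo_exp_weibull_pdf(x, lam, beta, k, alpha1)
            * mo_exp_weibull_pdf(y, lam, beta, k, alpha2))
```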
 