Understanding Multivariate Probability Distributions: A University-Level Guide

  • Thread starter JaysFan31
  • Tags: Probability
In summary, the conversation discusses a homework problem on multivariate probability distributions. The problem involves two individuals who each toss a coin until the first head appears, and asks for the expectations and variances of several quantities related to this experiment. It is pointed out that E(Y1) and E(Y2) both equal 1/p and that, by independence, the joint probability function is the product of the individual probability functions. The thread then turns to finding E[(Y1-Y2)^2] and V(Y1-Y2), using the expansion E[(Y1-Y2)^2] = E[Y1^2] - 2E[Y1Y2] + E[Y2^2].
  • #1
JaysFan31
I'm in a university-level probability course and I'm stuck on a homework problem. We are currently working on multivariate probability distributions.

I have this problem:
In an earlier exercise, we considered two individuals who each tossed a coin until the first head appeared. Let Y1 and Y2 denote the number of times that persons A and B toss the coin, respectively. If heads occurs with probability p and tails occurs with probability q=1-p, it is reasonable to conclude that Y1 and Y2 are independent and that each has a geometric distribution with parameter p. Consider Y1-Y2, the difference in the number of tosses required by the two individuals.

I need to find a lot of information about the experiment, like E(Y1), E(Y2), E(Y1-Y2), etc. I think I can determine all of the other information if I can just find what E(Y1) and E(Y2) are. This problem is different from all the others we've done since it does not give the joint probability density function. Thus, I have no idea where to start. What is the joint probability density function and what are E(Y1) and E(Y2)?
I would guess that E(Y1) and E(Y2) simply equal 1/p (from the geometric distribution), but that seems horribly wrong.

Any help would be greatly appreciated.
 
  • #2
Since Y1 and Y2 are geometric with parameter p, their expected value is 1/p; there's no question about that, it follows from a straightforward calculation. Why do you say it looks wrong?

And haven't you seen that if two random variables are independent, then their joint probability density is just the product of their respective densities?
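
Explicitly, for this problem (each Y_i counts the tosses up to and including the first head, so P(Y_i = y) = pq^(y-1) for y = 1, 2, ...), the joint probability function works out to

[tex]p(y_1,y_2)=P(Y_1=y_1)\,P(Y_2=y_2)=pq^{y_1-1}\cdot pq^{y_2-1}=p^2q^{y_1+y_2-2},\qquad y_1,y_2=1,2,3,\dots[/tex]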
 
  • #3
Thanks for the response.

If this is true, then does E(Y1-Y2)=0?
I'm using the fact that E(Y1-Y2) = E(Y1) - E(Y2), by linearity of expectation.
 
  • #4
Ok. I think I figured everything out.
I just need help with one thing. I have found a lot of information:
E(Y1), E(Y2), E(Y1-Y2), E[(Y1)^2], E[(Y2)^2], and E(Y1Y2).

However I need to find E((Y1-Y2)^2) and V(Y1-Y2).
I would use V(Y1-Y2)= E((Y1-Y2)^2)-(E(Y1-Y2))^2, but I don't know the first two.

Any suggestions please?
 
  • #5
Maybe this is not the easiest way but,

E[(Y1-Y2)²] = E[Y1² - 2Y1Y2 + Y2²] = E[Y1²] - 2E[Y1Y2] + E[Y2²]

You can easily find the distributions of these three new variables. For instance, the distribution of Z = Y1² comes straight from its cumulative distribution function (Y1 is discrete here, so the CDF is a sum rather than an integral):

[tex]F_Z(z)=P[Y_1^2\leq z]=P[Y_1\leq\sqrt{z}]=\sum_{k=1}^{\lfloor\sqrt{z}\rfloor}pq^{k-1},\qquad z\geq 1[/tex]

etc.
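
If you want a quicker route that avoids working out those new distributions, you can also use standard facts about the geometric distribution (just a sketch of that route): Var(Y_i) = q/p², so E[Y_i²] = Var(Y_i) + (E[Y_i])² = (1+q)/p², and independence gives E[Y_1Y_2] = E[Y_1]E[Y_2] = 1/p². Plugging into the expansion above,

[tex]E[(Y_1-Y_2)^2]=\frac{1+q}{p^2}-\frac{2}{p^2}+\frac{1+q}{p^2}=\frac{2q}{p^2},\qquad V(Y_1-Y_2)=E[(Y_1-Y_2)^2]-\big(E[Y_1-Y_2]\big)^2=\frac{2q}{p^2}.[/tex]

As a sanity check, a quick Monte Carlo simulation (a minimal sketch in Python/NumPy; the choice of p and the sample size are arbitrary) agrees with 2q/p²:

[code]
import numpy as np

# Monte Carlo check of E(Y1 - Y2) = 0 and V(Y1 - Y2) = 2q/p^2 for the
# coin-tossing experiment in the thread (illustrative values only).
rng = np.random.default_rng(0)
p = 0.3
q = 1 - p
n = 1_000_000

# Generator.geometric returns the number of trials up to and including the
# first success, which matches the definition of Y1 and Y2 here.
y1 = rng.geometric(p, size=n)
y2 = rng.geometric(p, size=n)
diff = y1 - y2

print("sample mean of Y1 - Y2:     ", diff.mean())   # should be near 0
print("sample variance of Y1 - Y2: ", diff.var())    # should be near 2q/p^2
print("theoretical 2q/p^2:         ", 2 * q / p**2)
[/code]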
 

FAQ: Understanding Multivariate Probability Distributions: A University-Level Guide

What is a multivariate probability distribution?

A multivariate probability distribution is a mathematical function that describes the probabilities of different outcomes occurring simultaneously in a system with multiple random variables. It is commonly used to model complex systems where multiple variables influence the outcome.

What are the key features of a multivariate probability distribution?

The key features of a multivariate probability distribution are the number of variables it considers, the type of distribution it follows (for example, multivariate normal or multinomial), the dependence structure between the variables, and the range of values each variable can take.

What are the benefits of understanding multivariate probability distributions?

Understanding multivariate probability distributions can help in analyzing and predicting outcomes in complex systems, such as stock markets, weather patterns, or biological processes. It also allows for better decision-making and risk management in various fields, including finance, economics, and engineering.

What methods can be used to analyze multivariate probability distributions?

There are several methods used to analyze multivariate probability distributions, including the use of joint probability distributions, covariance matrices, and correlation coefficients. Other techniques, such as principal component analysis and cluster analysis, can also be applied to identify patterns and relationships between variables.
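
For example, here is a minimal sketch in Python/NumPy (the data are entirely made up for illustration) of computing a covariance matrix and correlation coefficients for two variables:

[code]
import numpy as np

# Toy data: two correlated variables (synthetic, for illustration only).
rng = np.random.default_rng(42)
x = rng.normal(size=500)
y = 0.6 * x + rng.normal(scale=0.8, size=500)
data = np.vstack([x, y])        # shape (2, 500): one row per variable

# Covariance matrix: variances on the diagonal, covariance off the diagonal.
print(np.cov(data))

# Correlation matrix: the same information rescaled to lie in [-1, 1].
print(np.corrcoef(data))
[/code]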

How can I learn more about multivariate probability distributions?

There are many resources available for learning more about multivariate probability distributions, including textbooks, online courses, and academic articles. It is also helpful to have a strong understanding of basic probability and statistics concepts before delving into multivariate distributions.
