Joint probability of partitioned vectors

In summary, Bishop's introduction of partitioned vectors is straightforward, but the notation p(Xa, Xb) he later uses for conditional and marginal multivariate Gaussian distributions can be confusing at first: it denotes exactly the same joint density as p(X).
  • #1
scinoob
Hi everybody, I apologize if this question is too basic, but I did an hour of solid Google searching, couldn't find an answer, and I'm stuck.

I'm reading Bishop's Pattern Recognition and Machine Learning and in the second chapter he introduces partitioned vectors. Say, if X is a D-dimensional vector, it can be partitioned like:

X = [Xa, Xb], where Xa consists of the first M components of X and Xb of the remaining D-M components.

I have no problem with this simple concept. Later in the same chapter he talks about conditional and marginal multivariate Gaussian distributions and he uses the notation p(Xa, Xb). I'm trying to understand how certain integrals involving this notation are expanded but I'm actually struggling to understand even this expression. It seems to suggest that we're denoting the joint probability of the components of Xa and the components of Xb. But those are just the components of X anyway!

What is the difference between P(Xa, Xb) and P(X)?

It would be more helpful to consider a concrete example. Say X = [X1, X2, X3, X4], with Xa = [X1, X2] and Xb = [X3, X4]. Now, the joint probability P(X) would simply be P(X1, X2, X3, X4), right? What is P(Xa, Xb) in this case?

Thanks in advance!
 
  • #2
My guess: in later chapters he discusses Xa and Xb as separate entities.
 
  • #3
scinoob said:
What is the difference between P(Xa, Xb) and P(X)?
There is no difference between p(Xa, Xb) and p(X), because X = (Xa, Xb). It starts to get interesting when we introduce marginal and conditional probability densities, e.g. p(Xa | Xb) and p(Xb). By the product rule, p(Xa, Xb) = p(X) = p(Xa | Xb) p(Xb).

You get p(Xb) from p(X) by integrating out Xa.
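Written out, marginalization reads

p(Xb) = ∫ p(Xa, Xb) dXa,

so in the concrete four-component example above, p(X3, X4) = ∫∫ p(X1, X2, X3, X4) dX1 dX2.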

NB: use a small "p" for a probability density and a capital "P" for a probability.
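To make the marginalization concrete, here is a minimal numerical sketch (assuming numpy is available; the particular mean and the random covariance are illustrative, not from Bishop). For a Gaussian, the marginal over Xb is again Gaussian, with mean and covariance given by the corresponding blocks mu_b and Sigma_bb of the partitioned parameters, a standard result Bishop derives in the same chapter. Sampling the joint and simply discarding the Xa components should therefore reproduce those blocks:

# Check numerically that integrating out Xa from a joint Gaussian
# p(Xa, Xb) leaves the Gaussian N(mu_b, Sigma_bb) over Xb.
import numpy as np

rng = np.random.default_rng(0)

D, M = 4, 2                        # X = [X1..X4], Xa = first M components
mu = np.array([1.0, -1.0, 0.5, 2.0])
A = rng.normal(size=(D, D))
Sigma = A @ A.T + D * np.eye(D)    # a random positive-definite covariance

mu_b = mu[M:]                      # mean block for Xb
Sigma_bb = Sigma[M:, M:]           # covariance block for Xb

# Sample from the joint p(Xa, Xb) = p(X), then keep only the Xb components;
# this is the sampling analogue of integrating out Xa.
x = rng.multivariate_normal(mu, Sigma, size=200_000)
x_b = x[:, M:]

print(np.allclose(x_b.mean(axis=0), mu_b, atol=0.05))   # sample mean ~ mu_b
print(np.allclose(np.cov(x_b.T), Sigma_bb, atol=0.1))   # sample cov ~ Sigma_bb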
 

What is the joint probability of partitioned vectors?

The joint probability of partitioned vectors is the probability that the sub-vectors jointly take particular values. If X = [Xa, Xb], then p(Xa, Xb) assigns a probability (or, for continuous variables, a probability density) to every combined setting of the components of Xa and Xb; since those are exactly the components of X, it is the same thing as p(X).

How is the joint probability of partitioned vectors calculated?

In general it is built up with the product rule: P(A and B) = P(A) * P(B | A). Only when A and B are independent does this reduce to P(A) * P(B). For partitioned vectors, the same rule reads p(Xa, Xb) = p(Xa | Xb) p(Xb).
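A worked illustration: drawing two cards from a standard deck without replacement, P(both are aces) = P(first is an ace) * P(second is an ace | first is an ace) = (4/52) * (3/51) = 1/221. The naive product (4/52) * (4/52) would be wrong, because the draws are not independent.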

What is the difference between joint probability and conditional probability?

Joint probability measures the likelihood of two events occurring simultaneously, while conditional probability measures the likelihood of an event occurring given that another event has already occurred. In other words, conditional probability takes into account additional information, while joint probability does not.
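Continuing the card illustration: the joint probability that both cards are aces is 1/221, whereas the conditional probability that the second card is an ace, given that the first was an ace, is 3/51 = 1/17.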

What is the relationship between joint probability and independence?

If two events are independent, the joint probability of those events will be equal to the product of their individual probabilities. This means that the occurrence of one event does not affect the likelihood of the other event occurring.
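For example, two fair dice rolled together are independent, so P(both show a six) = (1/6) * (1/6) = 1/36; by contrast, the two card draws above are not independent, which is why their joint probability is not (4/52)^2.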

How is the joint probability of partitioned vectors used in statistics and data analysis?

The joint probability of partitioned vectors is used to calculate the probability of complex events and to understand the relationships between multiple variables. It is an important concept in probability theory and is often used in statistical analysis and data modeling to make predictions and draw conclusions from data.
