Finding Class Posterior Probabilities from Linear Discriminant Function

In summary, the conversation discusses finding class posterior probabilities from a linear discriminant function. The solution starts from the expression yi(x) = ln{p(x|Ci)p(Ci)}, and through manipulation it is determined that p(x|C1) = exp(2x1) and p(x|C2) = exp(-2x2) would satisfy the condition p(x|C1)/p(x|C2) = exp(2x1 + 2x2).
  • #1
sensitive
Hi

I am doing this exercise (2-class problem with 2-dimensional features) and I have solved the linear discriminant function, which turns out to be y1(x) - y2(x) = 2x1 + 2x2

I am having difficulty finding the class posterior probabilities from the linear discriminant function obtained.

I tried the following approach and got stuck.

We know that yi(x) = ln{p(x|Ci)p(Ci)}

from the linear discriminant function obtained

y1(x) = ln{p(x|C1)p(C1)} = 2x1 + 2x2 + y2(x)
ln{p(x|C1)p(C1)} = 2x1 + 2x2 + ln{p(x|C2)p(C2)}
ln{p(x|C1)p(C1)} - ln{p(x|C2)p(C2)} = 2x1 + 2x2

Exponentiating both sides we get
{p(x|C1)p(C1)}/{p(x|C2)p(C2)} = exp(2x1 + 2x2)

p(C1) and p(C2) are constants, so we can absorb them, giving the following

p(x|C1)/p(x|C2) = exp(2x1 + 2x2)

From this point I am not sure how to separate both posterior probabilities.


Please help... Thank you
 
  • #2
As long as p(x|C1)/p(x|C2)= exp(2x1 + 2x2) is the only condition that needs to hold, p(x|C1) = exp(2x1), p(x|C2)= exp(-2x2) would satisfy it.
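The suggestion above is easy to sanity-check numerically. This is a minimal sketch (the function names `likelihood_c1` and `likelihood_c2` are hypothetical, introduced here just for illustration): with the unnormalized choices p(x|C1) = exp(2x1) and p(x|C2) = exp(-2x2), the ratio equals exp(2x1 + 2x2) for any feature vector (x1, x2).

```python
import math

# Unnormalized likelihood choices suggested in the post above.
def likelihood_c1(x1, x2):
    return math.exp(2 * x1)

def likelihood_c2(x1, x2):
    return math.exp(-2 * x2)

# Check the ratio condition p(x|C1)/p(x|C2) = exp(2*x1 + 2*x2) at a few points.
for x1, x2 in [(0.0, 0.0), (1.0, -0.5), (-2.0, 3.0)]:
    ratio = likelihood_c1(x1, x2) / likelihood_c2(x1, x2)
    assert math.isclose(ratio, math.exp(2 * x1 + 2 * x2))
```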
 
  • #3
Thank you for sharing your progress and question with us. It seems like you have made good progress so far. To find the class posterior probabilities from the linear discriminant function, you can use Bayes' theorem, which states:

p(ci|x) = p(x|ci)p(ci)/p(x)

Where p(ci|x) is the posterior probability of class ci given the feature vector x, p(x|ci) is the likelihood of x given class ci, p(ci) is the prior probability of class ci, and p(x) is the evidence or marginal probability of x.

In your case, you have already calculated the linear discriminant function y1(x) - y2(x) = 2x1 + 2x2. Note that this fixes only the likelihood *ratio*, p(x|C1)/p(x|C2) = exp(2x1 + 2x2), not the individual likelihoods. One consistent (unnormalized) choice, as suggested in post #2, is p(x|C1) = exp(2x1) and p(x|C2) = exp(-2x2).

Next, you need the prior probabilities for both classes. If nothing else is specified in this two-class problem, a common assumption is equal priors, i.e. p(C1) = p(C2) = 0.5.

Finally, to calculate the evidence or marginal probability of x, you can use the law of total probability:

p(x) = p(x|C1)p(C1) + p(x|C2)p(C2)

Now, you have all the components to calculate the class posterior probabilities using Bayes' theorem. For example, the posterior probability of class C1 given the feature vector x would be:

p(C1|x) = p(x|C1)p(C1)/p(x) = [p(x|C1) * 0.5]/[p(x|C1) * 0.5 + p(x|C2) * 0.5] = 1/[1 + p(x|C2)/p(x|C1)] = 1/[1 + exp(-(2x1 + 2x2))]

Similarly, you can calculate the posterior probability of class C2 given x.
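The steps above can be sketched in a few lines of Python. This is a minimal sketch under the assumptions stated in this post (equal priors, and the unnormalized likelihood choices exp(2x1) and exp(-2x2) from post #2); the function name `posterior_c1` is hypothetical.

```python
import math

def posterior_c1(x1, x2, prior_c1=0.5):
    """Posterior p(C1|x) via Bayes' theorem for the thread's example."""
    prior_c2 = 1.0 - prior_c1
    # Only the likelihood ratio is fixed by the discriminant; these are one
    # consistent unnormalized choice.
    like_c1 = math.exp(2 * x1)
    like_c2 = math.exp(-2 * x2)
    # Law of total probability for the evidence p(x).
    evidence = like_c1 * prior_c1 + like_c2 * prior_c2
    return like_c1 * prior_c1 / evidence
```

With equal priors this reduces to the logistic sigmoid of the discriminant difference: p(C1|x) = 1/[1 + exp(-(2x1 + 2x2))], and p(C2|x) = 1 - p(C1|x).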

I hope this helps you in finding the class posterior probabilities from the linear discriminant function. Keep up the good work!
 

Related to Finding Class Posterior Probabilities from Linear Discriminant Function

1. What is a linear discriminant function?

A linear discriminant function is a mathematical formula used to classify data into distinct categories based on a set of input variables. It calculates a linear combination of the input variables and compares it to a threshold to determine the category of the data.
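As a small illustration of the definition above (the function name `classify` and the test point are hypothetical), classification reduces to the sign of a linear combination w . x + b:

```python
# Classify by thresholding a linear combination of the input variables.
def classify(x, w, b):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return "C1" if score > 0 else "C2"

# For the thread's example, y1(x) - y2(x) = 2*x1 + 2*x2, i.e. w = (2, 2), b = 0.
print(classify((1.0, 0.5), w=(2.0, 2.0), b=0.0))   # score = 3.0 > 0, prints "C1"
```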

2. How do you find class posterior probabilities from a linear discriminant function?

To find class posterior probabilities from a linear discriminant function, you first need to calculate the discriminant score for each class using the input variables. Then, use these scores to calculate the probability of each class using the Bayes' theorem. The class with the highest probability is the most likely classification for the data.
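The recipe above can be sketched as follows. Assuming discriminant scores of the form y_i(x) = ln{p(x|Ci)p(Ci)}, Bayes' theorem makes the posteriors a softmax over the scores (the function name `posteriors` is hypothetical):

```python
import math

def posteriors(scores):
    """Turn discriminant scores y_i = ln(p(x|c_i) p(c_i)) into posteriors."""
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)                    # proportional to the evidence p(x)
    return [e / total for e in exps]

probs = posteriors([2.0, 0.5])
# The class with the highest posterior is the predicted class.
```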

3. What is the importance of finding class posterior probabilities?

Finding class posterior probabilities is important because it provides a measure of uncertainty in the classification of data. It allows for a more nuanced understanding of the data and can help identify areas where the classification may be less clear. Additionally, it can be used in decision making and risk analysis.

4. Can you use a linear discriminant function for non-linearly separable data?

No, a linear discriminant function is only suitable for linearly separable data. For non-linearly separable data, non-linear discriminant functions such as quadratic discriminant analysis or support vector machines may be more appropriate.

5. How do you evaluate the performance of a linear discriminant function?

The performance of a linear discriminant function can be evaluated by calculating the overall accuracy of the classification, as well as metrics such as precision, recall, and F1-score for each class. Additionally, cross-validation techniques can be used to assess the generalizability of the model to new data.
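The metrics mentioned above can be computed directly; this is a minimal plain-Python sketch (the function name `metrics` and the example labels are hypothetical):

```python
def metrics(y_true, y_pred, positive="C1"):
    """Accuracy, and precision/recall/F1 for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(["C1", "C2", "C1", "C2"], ["C1", "C2", "C2", "C2"])
# acc = 0.75, precision = 1.0, recall = 0.5
```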
