Maximum Entropy Distribution Given Marginals

This thread works through finding the maximum entropy joint distribution that is consistent with given marginal probability density functions q1(x) and q2(y).
  • #1
Legendre
Hi all,

I'm a pure mathematician (Graph Theory) who has to go through a Physics paper, and I am having trouble getting through a part of it. Maybe you guys can point me in the right direction:

Let P(x,y) be a joint distribution function.

Let [itex]H = -\sum_{x,y} P(x,y) \log P(x,y)[/itex], which is the entropy.

We are given the marginal probability density functions q1(x) and q2(y).

To get the maximum entropy distribution consistent with the marginals, we solve this problem:

max H

subject to constraints,

[itex]\sum_y P(x,y) = q_1(x)[/itex] for all x,
[itex]\sum_x P(x,y) = q_2(y)[/itex] for all y.

---------------------

The Lagrangian is

[tex]L = -\sum_{x,y} P(x,y) \log P(x,y) \;+\; \sum_x \lambda_1(x) \Big( \sum_y P(x,y) - q_1(x) \Big) \;+\; \sum_y \lambda_2(y) \Big( \sum_x P(x,y) - q_2(y) \Big)[/tex]

with one multiplier per constraint, i.e. one [itex]\lambda_1(x)[/itex] for each value of x and one [itex]\lambda_2(y)[/itex] for each value of y.

To find the stationary point of L, we differentiate it with respect to a particular P(x',y') while keeping the other variables constant.

I know how to do this for the first sum, which gives [itex]-(1 + \log P(x',y'))[/itex].
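(For completeness, the computation: only the (x', y') term of the double sum involves P(x',y'), and [itex]\frac{d}{dp}(p \log p) = 1 + \log p[/itex], so

[tex]\frac{\partial}{\partial P(x',y')} \Big( -\sum_{x,y} P(x,y) \log P(x,y) \Big) = -\big(1 + \log P(x',y')\big).[/tex])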

---------------------

How do we differentiate the second and third sums with respect to P(x',y')?

The paper gave the solution to differentiating L with respect to P(x',y') as:

[itex]-(1 + \log P(x',y')) + \lambda_1(x') + \lambda_2(y')[/itex]


Thanks!
 
  • #2
The basic idea behind Lagrangian optimization is to maximize an objective function while satisfying constraints. Here the objective is the entropy H, and the constraints say that the marginals of P(x,y) must equal q1(x) and q2(y). We build the Lagrangian L from the objective and the constraints, then find a stationary point of L by differentiating with respect to a particular P(x',y') while keeping the other variables constant.

For the second and third sums, no chain rule is needed: each constraint is linear in the unknowns, and P(x',y') appears in exactly one of the x-constraints (the one with x = x') and exactly one of the y-constraints (the one with y = y'), in exactly one term of each. So

[tex]\frac{\partial}{\partial P(x',y')} \Big( \sum_y P(x',y) - q_1(x') \Big) = 1, \qquad \frac{\partial}{\partial P(x',y')} \Big( \sum_x P(x,y') - q_2(y') \Big) = 1,[/tex]

and every other constraint has derivative zero. Each of these derivatives gets multiplied by its own multiplier, so differentiating L with respect to P(x',y') gives

[tex]-(1 + \log P(x',y')) + \lambda_1(x') + \lambda_2(y')[/tex]

which is the expression in the paper (its [itex]\lambda_1, \lambda_2[/itex] are shorthand for [itex]\lambda_1(x'), \lambda_2(y')[/itex]).
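As a sanity check on where this leads: setting the derivative to zero gives [itex]\log P(x,y) = \lambda_1(x) + \lambda_2(y) - 1[/itex], so P factors as a function of x times a function of y, and the marginal constraints then force [itex]P(x,y) = q_1(x) q_2(y)[/itex]. Here is a minimal numerical sketch of that conclusion (not from the paper; the marginals q1, q2 are made-up examples, and it uses SciPy's SLSQP solver):

[code]
# Sanity check: maximize H over discrete joints P subject to the marginal
# constraints, and compare the optimum with the outer product q1 * q2,
# which is what the stationarity condition predicts.
import numpy as np
from scipy.optimize import minimize

q1 = np.array([0.2, 0.5, 0.3])       # example marginal over x (sums to 1)
q2 = np.array([0.1, 0.4, 0.3, 0.2])  # example marginal over y (sums to 1)
nx, ny = len(q1), len(q2)

def neg_entropy(p_flat):
    p = p_flat.reshape(nx, ny)
    return np.sum(p * np.log(p + 1e-12))  # -H(P); small offset avoids log(0)

# One equality constraint per value of x (row sums) and per value of y
# (column sums) -- matching one Lagrange multiplier per constraint.
cons = (
    [{"type": "eq", "fun": lambda p, i=i: p.reshape(nx, ny)[i, :].sum() - q1[i]}
     for i in range(nx)]
    + [{"type": "eq", "fun": lambda p, j=j: p.reshape(nx, ny)[:, j].sum() - q2[j]}
       for j in range(ny)]
)

p0 = np.full(nx * ny, 1.0 / (nx * ny))  # start from the uniform joint
res = minimize(neg_entropy, p0, method="SLSQP",
               bounds=[(0.0, 1.0)] * (nx * ny), constraints=cons)

p_star = res.x.reshape(nx, ny)
print(np.abs(p_star - np.outer(q1, q2)).max())  # ~0: optimum is q1(x) q2(y)
[/code]

Running it prints a maximum absolute deviation near zero, confirming that the constrained entropy maximizer is the product of the marginals.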
 

Related to Maximum Entropy Distribution Given Marginals

What is the Maximum Entropy Distribution Given Marginals?

The Maximum Entropy Distribution Given Marginals is a probability distribution that has the maximum entropy or uncertainty while satisfying certain constraints or known information about the system. It is a way of estimating a probability distribution when there is limited information available.

How is the Maximum Entropy Distribution Given Marginals calculated?

The Maximum Entropy Distribution Given Marginals is typically calculated using the principle of maximum entropy, which states that, given the available information, the distribution with the highest entropy is the least biased choice. The constraints are incorporated into the calculation using Lagrange multipliers.
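In general, with expectation constraints [itex]\sum_x P(x) f_k(x) = F_k[/itex], the maximum entropy solution takes the exponential-family form

[tex]P(x) = \frac{1}{Z(\lambda)} \exp\Big( \sum_k \lambda_k f_k(x) \Big), \qquad Z(\lambda) = \sum_x \exp\Big( \sum_k \lambda_k f_k(x) \Big),[/tex]

with the multipliers [itex]\lambda_k[/itex] chosen so that the constraints hold. The marginal-constraint problem in the thread above is the special case where this form factors, giving [itex]P(x,y) = q_1(x) q_2(y)[/itex].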

What constraints are used in the calculation of Maximum Entropy Distribution Given Marginals?

The constraints used in the calculation depend on the specific problem at hand. Typically they fix known marginal distributions, i.e. the probabilities of certain events. Other constraints, such as a specified mean or variance, can also be used depending on the problem.

What are the applications of Maximum Entropy Distribution Given Marginals?

The Maximum Entropy Distribution Given Marginals has many applications in fields such as physics, statistics, and machine learning. It is commonly used in image and signal processing, natural language processing, and data analysis. It is also used in the study of complex systems, such as in economics and social sciences.

What are the limitations of Maximum Entropy Distribution Given Marginals?

One limitation of the Maximum Entropy Distribution Given Marginals is that it requires the use of constraints, which may not always accurately represent the true distribution. In addition, the calculation can become computationally intensive for large datasets or with a large number of constraints. Furthermore, the interpretation of the resulting distribution may be difficult due to its high entropy or uncertainty.
