Maximum Entropy in Gaussian Setting

In summary, the thread discusses which distribution of the random variables maximizes the differential entropies appearing in a set of mutual-information inequalities. The Normal distribution is known to maximize differential entropy for a given variance, and the discussion focuses on whether, under assumptions (i), (ii), (iii), maximizing the set of inequalities still yields a Gaussian distribution for (b*X1+X2+N3|V). The end goal is to show that a jointly Gaussian distribution of U, V, X1, and X2 maximizes the set of inequalities.
  • #1
ferreat
Hello,
I have a question about the distribution of random variables that maximizes the differential entropy in a set of inequalities. It is well known that, for a fixed variance, the Normal distribution maximizes the differential entropy. I have the following set of inequalities:

T1 < I(V;Y1|U)
T2 < I(U;Y2)
T3 < I(X1,X2;Y3|V)
T4 < I(X1,X2;Y3)

where Y1 = X1 + N1, Y2 = a*X1 + N2, Y3 = b*X1 + X2 + N3, and N1, N2, N3 are Gaussian ~ N(0,1). The lowercase a and b are positive real numbers with a < b. U, V, X1 and X2 are random variables. I want to maximize this set of inequalities. I know the following:
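As a numerical sanity check, here is a sketch of the closed-form entropies of Y1, Y2, Y3 when the inputs are Gaussian. The power constraints P1, P2 and the values of a, b are assumed illustrative numbers, not part of the original post:

```python
import numpy as np

# Closed-form differential entropy of a Gaussian: h = 0.5 * ln(2*pi*e*var), in nats.
def gaussian_entropy(var):
    return 0.5 * np.log(2 * np.pi * np.e * var)

# Illustrative power constraints and channel gains (assumed, not from the thread).
P1, P2 = 2.0, 3.0
a, b = 0.5, 1.5  # positive reals with a < b, as in the setup

# With X1 ~ N(0, P1), X2 ~ N(0, P2) independent and unit-variance noise:
h_Y1 = gaussian_entropy(P1 + 1)               # Y1 = X1 + N1
h_Y2 = gaussian_entropy(a**2 * P1 + 1)        # Y2 = a*X1 + N2
h_Y3 = gaussian_entropy(b**2 * P1 + P2 + 1)   # Y3 = b*X1 + X2 + N3

print(h_Y1, h_Y2, h_Y3)
```

By the maximum entropy theorem, each of these values upper-bounds the corresponding entropy for any input distribution with the same second moments.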

(i) From T4, h(Y3) is maximized when Y3 is Gaussian, which in turn requires X1 and X2 to be Gaussian.

(ii) T2 is maximized by making h(Y2) = h(a*X1+N2) maximal. By the Entropy Power Inequality (EPI) we can then bound -h(a*X1+N2|U), which gives X1|U Gaussian.

(iii) T1 is maximized by making h(Y1|U) = h(X1+N1|U) maximal, which is consistent with (ii): the bound on -h(a*X1+N2|U) from the EPI is attained with Y1 Gaussian (satisfying the maximum entropy theorem).
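The equality case of the EPI invoked in (ii) can be checked numerically. The sketch below compares entropy powers for Gaussian summands (where the EPI holds with equality) and for uniform summands (where it is strict); the uniform example is an added illustration, not from the post:

```python
import math

def entropy_power(h):
    # Entropy power: N(X) = exp(2h) / (2*pi*e), for differential entropy h in nats.
    return math.exp(2 * h) / (2 * math.pi * math.e)

# Gaussians X, Y ~ N(0, 1): h = 0.5 * ln(2*pi*e*var); EPI holds with equality.
h_g = 0.5 * math.log(2 * math.pi * math.e * 1.0)
h_sum_g = 0.5 * math.log(2 * math.pi * math.e * 2.0)  # X + Y ~ N(0, 2)
print(entropy_power(h_sum_g), 2 * entropy_power(h_g))  # 2.0 vs 2.0: equality

# Uniforms on [0, 1]: h = 0 nats; their sum is triangular with h = 1/2 nat.
h_u, h_sum_u = 0.0, 0.5
print(entropy_power(h_sum_u), 2 * entropy_power(h_u))  # strict: N(X+Y) > N(X)+N(Y)
```

The strict inequality for non-Gaussian summands is exactly what makes the EPI useful for bounding conditional entropy terms like -h(a*X1+N2|U).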

The Question:

From T3, can I assume that a jointly Gaussian distribution maximizes h(Y3|V) = h(b*X1+X2+N3|V), given assumptions (i), (ii), (iii)?

My aim is to show that a jointly Gaussian distribution of U, V, X1 and X2 maximizes the set of inequalities. I hope someone can help me out with this.
 
  • #2
I don't understand your notation. Is "T4" a number, or only a designator for an expression? Do the vertical bars "|" denote absolute values, or conditional probabilities?
 
  • #3
Thanks Stephen for your reply. Basically the set of inequalities is what is known in Information Theory as a rate region:
T1 < I(V;Y1|U)
T2 < I(U;Y2)
T3 < I(X1,X2;Y3|V)
T1+T2+T3 < I(X1,X2;Y3).
T1, T2 and T3 are the rates obtained when transmitting messages 1, 2 and 3. The I's are mutual informations, and the vertical bars "|" indicate conditioning. For instance, I(V;Y1|U) = h(Y1|U) - h(Y1|U,V), where h(x) is the differential entropy.
My question is basically: after having assumed in (iii) that h(X1+N1|U) maximal implies (X1+N1|U) Gaussian, can I assume that h(b*X1+X2+N3|V) maximal implies (b*X1+X2+N3|V) Gaussian? I know that if I hadn't assumed (i), (ii), (iii), the answer would be affirmative; but given (i), (ii), (iii), is it still true?
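To make the mutual-information terms concrete, here is a sketch of the simplest case, I(X1;Y1) = h(Y1) - h(Y1|X1) = h(X1+N1) - h(N1) with Gaussian input. The input power P is an assumed illustrative value, not from the thread:

```python
import math

# Differential entropy of a Gaussian with the given variance, in nats.
def h_gauss(var):
    return 0.5 * math.log(2 * math.pi * math.e * var)

P = 4.0  # assumed power constraint on X1 (illustrative)

# I(X1; Y1) = h(Y1) - h(Y1 | X1) = h(X1 + N1) - h(N1), with N1 ~ N(0, 1).
I = h_gauss(P + 1) - h_gauss(1)
print(I, 0.5 * math.log(1 + P))  # both equal 0.5*ln(1+P)
```

The conditional terms such as I(V;Y1|U) follow the same pattern, with each h(·) replaced by its conditional version.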
 

What is maximum entropy in the context of a Gaussian setting?

Maximum entropy in a Gaussian setting refers to the principle of choosing a probability distribution that maximizes the entropy, or randomness, subject to certain constraints. In other words, it is a way to find the most likely distribution of data based on limited information.
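For a fixed variance, this can be checked against closed-form entropies. The sketch below compares the Gaussian with uniform and Laplace distributions of the same variance; the specific distributions are chosen for illustration:

```python
import math

var = 1.0  # fix a common variance; the ordering holds for any variance

# Closed-form differential entropies (in nats) at equal variance:
h_gaussian = 0.5 * math.log(2 * math.pi * math.e * var)  # ~1.419
h_uniform = 0.5 * math.log(12 * var)                     # width sqrt(12*var), ~1.242
h_laplace = 1 + math.log(2 * math.sqrt(var / 2))         # scale b = sqrt(var/2), ~1.347

print(h_gaussian, h_uniform, h_laplace)  # Gaussian is the largest
```

Among all distributions with the same variance, the Gaussian attains the maximum differential entropy, which is what the maximum entropy theorem asserts.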

Why is maximum entropy important in a Gaussian setting?

Maximum entropy is important in a Gaussian setting because it allows us to make the most accurate predictions about a system given limited information. It can also help us identify the most likely underlying distribution of data, which can be useful in various applications such as signal processing and machine learning.

What are the main assumptions of maximum entropy in a Gaussian setting?

The main assumptions of maximum entropy in a Gaussian setting are constraints on the first and second moments of the data (mean and variance, or covariance): among all distributions with a given covariance, the Gaussian maximizes the differential entropy. Independence between data points may also be assumed, and more generally the constraints are taken to be in the form of expected values of given functions.

How is maximum entropy in a Gaussian setting different from other maximum entropy principles?

Maximum entropy in a Gaussian setting differs from other maximum entropy principles in that it is specifically applied to data that follows a Gaussian distribution. Other maximum entropy principles may be applied to different types of data and have different assumptions and constraints.

What are some real-world applications of maximum entropy in a Gaussian setting?

Some real-world applications of maximum entropy in a Gaussian setting include image and signal processing, machine learning, and statistical physics. It can also be used in various fields such as economics, biology, and ecology to model and predict complex systems.
