Relating Entropies of Probability Spaces

In summary, the relation between the entropy of T and the entropy of T_i can be expressed as a weighted sum, where the weights are determined by the measure of the subsets of X.
  • #1
Juanriq
Ahoy hoy, trying to see if I am on the right track with a question.

Homework Statement

Let [itex] T: (X, \mathcal{B}, m) [/itex] to itself be a measure-preserving transformation (mpt) of a probability space. Suppose [itex] X = X_1 \cup X_2 [/itex], where [itex] X_1 \cap X_2 = \emptyset [/itex], [itex] m(X_i) > 0 [/itex], and [itex] T^{-1}X_i = X_i [/itex]. Let [itex] T_i: (X_i, \mathcal{B}|X_i, m_i) [/itex] to itself be given by [itex] T_i = T|X_i [/itex], where [itex] m_i = m|X_i [/itex] normalized. What is the relation between [itex] h_m(T) [/itex] and [itex] h_{m_i}(T_i) [/itex]?


Homework Equations





The Attempt at a Solution

I know [itex] m(X) = 1 [/itex], so say [itex] m(X_1) = p [/itex] and [itex] m(X_2) = 1-p [/itex]. This would mean that [itex] m_1 = \frac{m|X_1}{m(X_1)} = \frac{m|X_1}{p} [/itex] and also [itex] m_2 = \frac{m|X_2}{m(X_2)} = \frac{m|X_2}{1-p} [/itex]. Now, [itex] h_m(T) = -m(X_1) \log m(X_1) - m(X_2) \log m(X_2) [/itex], and so [itex] h_{m_i}(T_i) = -m_i(X_i) \log m_i(X_i) [/itex].



Is this right so far? Thanks!
 
  • #2
Yes, you are on the right track. The relation between [itex] h_m(T) [/itex] and [itex] h_{m_i}(T_i) [/itex] can be expressed as follows: [itex] h_m(T) = p\, h_{m_1}(T_1) + (1-p)\, h_{m_2}(T_2) [/itex].
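To see where the weighted sum comes from, here is a sketch of the standard argument for a T-invariant decomposition (details of the final supremum step omitted). For any finite partition [itex] \xi [/itex] of [itex] X [/itex], join it with [itex] \alpha = \{X_1, X_2\} [/itex]. Since [itex] T^{-1}X_i = X_i [/itex], every atom of [itex] \bigvee_{j=0}^{n-1} T^{-j}(\xi \vee \alpha) [/itex] lies entirely inside [itex] X_1 [/itex] or [itex] X_2 [/itex], and writing each atom's measure as [itex] m(B) = m(X_i)\, m_i(B) [/itex] gives

[tex]
H_m\Big(\bigvee_{j=0}^{n-1} T^{-j}(\xi \vee \alpha)\Big)
= H(p) + p\, H_{m_1}\Big(\bigvee_{j=0}^{n-1} T_1^{-j}\,\xi|X_1\Big) + (1-p)\, H_{m_2}\Big(\bigvee_{j=0}^{n-1} T_2^{-j}\,\xi|X_2\Big),
[/tex]

where [itex] H(p) = -p \log p - (1-p) \log (1-p) [/itex]. Dividing by [itex] n [/itex] and letting [itex] n \to \infty [/itex] kills the constant [itex] H(p) [/itex], and taking suprema over partitions (with a little care in each direction) yields [itex] h_m(T) = p\, h_{m_1}(T_1) + (1-p)\, h_{m_2}(T_2) [/itex].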
 

1. What is entropy in the context of probability spaces?

Entropy is a measure of the uncertainty or randomness of a system. In the context of probability spaces, it represents the amount of information needed to describe the outcomes of a random experiment. It is typically denoted by the letter H and is measured in bits or nats.
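For concreteness, the usual formula for a discrete distribution [itex] (p_1, \dots, p_n) [/itex] is

[tex]
H = -\sum_{i=1}^{n} p_i \log p_i,
[/tex]

with the convention [itex] 0 \log 0 = 0 [/itex]; taking [itex] \log_2 [/itex] gives the answer in bits, and the natural logarithm gives nats.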

2. How is entropy related to the size of a probability space?

The entropy of a probability space is bounded by the size of the space, that is, by the number of possible outcomes. A larger probability space with more possible outcomes allows a higher entropy, meaning it can be more uncertain and require more information, on average, to describe an outcome; the actual value also depends on how the probability is distributed over those outcomes.
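A short worked bound makes this precise: for a space with [itex] n [/itex] possible outcomes,

[tex]
H = -\sum_{i=1}^{n} p_i \log p_i \le \log n,
[/tex]

with equality exactly when [itex] p_i = 1/n [/itex] for every [itex] i [/itex] (this follows from Jensen's inequality applied to the concave logarithm). The number of outcomes caps the entropy; the exact value depends on how the probability is spread over them.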

3. Can entropy be negative?

No, in the context of discrete probability spaces, entropy cannot be negative. It is a measure of uncertainty, and the formula only produces values that are zero or positive; it is zero exactly when one outcome occurs with probability 1.
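As a quick check, each term of the sum satisfies [itex] -p_i \log p_i \ge 0 [/itex] because [itex] \log p_i \le 0 [/itex] for [itex] 0 < p_i \le 1 [/itex], and the extreme case of a point mass gives

[tex]
H(1, 0, \dots, 0) = -1 \cdot \log 1 = 0,
[/tex]

using the convention [itex] 0 \log 0 = 0 [/itex] for the remaining terms.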

4. How is entropy affected by the probability of outcomes?

The probability of outcomes in a probability space directly affects the entropy. If all outcomes have equal probabilities, the entropy will be at its maximum. As the probabilities of outcomes become more unequal, the entropy will decrease, and if one outcome has a probability of 1, the entropy will be 0.
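Here is a minimal numerical illustration in Python (a standalone sketch, not part of the original thread; the function name is my own):

[code]
import math

def shannon_entropy_bits(probs):
    """Shannon entropy of a discrete distribution, in bits (0*log 0 treated as 0)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Fair coin: equal probabilities give the maximum entropy of 1 bit.
print(shannon_entropy_bits([0.5, 0.5]))   # 1.0

# Biased coin: more unequal probabilities give lower entropy.
print(shannon_entropy_bits([0.9, 0.1]))   # ~0.469

# Certain outcome: one probability equal to 1 gives zero entropy.
print(shannon_entropy_bits([1.0, 0.0]))   # 0.0
[/code]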

5. What is the relationship between entropy and information gain?

Information gain is a concept used in decision trees to measure how much knowing a given feature reduces the uncertainty in a dataset. It is defined in terms of entropy: the information gain of a feature is the difference between the entropy before splitting on that feature and the weighted average entropy after the split. A higher information gain therefore means a larger reduction in entropy, so features with high information gain are the most useful for reducing uncertainty and making predictions.
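For reference, the standard decision-tree formula (generic notation, not specific to this thread) is

[tex]
IG(S, A) = H(S) - \sum_{v} \frac{|S_v|}{|S|}\, H(S_v),
[/tex]

where [itex] S_v [/itex] is the subset of the dataset [itex] S [/itex] on which the feature [itex] A [/itex] takes the value [itex] v [/itex]; the sum is the weighted average entropy after splitting on [itex] A [/itex], so a large information gain corresponds to a large drop in entropy.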
