Why Is the Conditional Entropy of g(X) Given X Zero?

  • Context: Graduate
  • Thread starter: PhillipKP
  • Tags: Conditional Entropy
SUMMARY

The conditional entropy of a function of a random variable X, written H(g(X)|X), is zero. For any specific value X = x, the value g(x) is fixed, so there is no remaining uncertainty in the outcome. As explained in "Elements of Information Theory," knowing X lets you infer g(X) exactly, with probability 1, which gives H(g(X)|X) = 0.

PREREQUISITES
  • Understanding of random variables and their properties
  • Familiarity with the concept of entropy in information theory
  • Knowledge of functions and their mappings in probability
  • Basic grasp of conditional probability
NEXT STEPS
  • Study the concept of entropy in detail using "Elements of Information Theory" by Thomas M. Cover and Joy A. Thomas
  • Explore conditional probability and its applications in statistics
  • Learn about the implications of deterministic functions in probability theory
  • Investigate other properties of entropy, such as joint entropy and mutual information
USEFUL FOR

Students and professionals in statistics, data science, and information theory who seek to deepen their understanding of entropy and its applications in probabilistic models.

PhillipKP
Hi

I'm trying to convince myself that the conditional entropy of a function of a random variable X, given X itself, is zero:

H(g(X)|X)=0

The book I'm reading (Elements of Information Theory) says that since g(X) is fixed for any particular value of X, the statement is true. But I don't understand why this makes the conditional entropy 0.

Obviously I don't understand conceptually what conditional entropy really is...

Can anyone please provide some "gentle" insight into this?
 
If we know X = x, then we can infer g(X) = g(x) with probability 1, and anything else has probability 0. Hence there is no uncertainty about g(X) once we know X. In symbols: conditional entropy decomposes as H(g(X)|X) = Σ_x p(x) H(g(X)|X=x), and since the conditional distribution of g(X) given X = x puts all its mass on the single value g(x), each term is -1·log 1 = 0. Therefore H(g(X)|X) = 0.
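To check the same calculation numerically, here is a minimal Python sketch (the pmf p_x and the function g below are made-up illustration values, not from the thread). It evaluates the decomposition H(g(X)|X) = Σ_x p(x) H(g(X)|X=x) term by term; every term vanishes because each conditional distribution is degenerate.

```python
from math import log2

# Hypothetical pmf of X over a few outcomes (illustration values only)
p_x = {0: 0.2, 1: 0.5, 2: 0.3}

# Any deterministic function of X will do; parity is just an example
def g(x):
    return x % 2

# H(g(X)|X) = sum over x of p(x) * H(g(X) | X = x)
h_cond = 0.0
for x, px in p_x.items():
    # Given X = x, g(X) equals g(x) with probability 1, so the
    # conditional distribution of g(X) is degenerate (a point mass).
    cond_dist = {g(x): 1.0}
    h_given_x = -sum(q * log2(q) for q in cond_dist.values())  # -1*log2(1) = 0
    h_cond += px * h_given_x

print(h_cond)  # prints 0.0: no uncertainty about g(X) once X is known
```

Swapping in any other pmf or any other deterministic g leaves the result at zero, since each inner entropy is computed over a point mass.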
 
Ah, that makes very good conceptual sense. Thank you for the short but insightful explanation.

Cheers

Phillip
 
