
Entropy (Information Theory Question)

  1. Jan 24, 2015 #1
    1. The problem statement, all variables and given/known data
    Let ##X## and ##Y## be two independent integer-valued random variables. Let ##X## be uniformly distributed over ##\left\{1,2,...,8\right\}##, and let ##\text{Pr}\left\{Y=k\right\} =2^{-k},~k=1,2,3,...##
    (a) Find ##H(X)##.
    (b) Find ##H(Y)##.
    (c) Find ##H(X+Y,X-Y)##.

    2. Relevant equations
    I am confused about part (c). I have found the answers to (a) and (b): they are 3 bits and 2 bits, respectively. However, the solution I get for (c) does not match the answer, which is apparently 5 bits.
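    As a quick numeric sanity check on (a) and (b), here is a small computation (just a sketch; I truncate the infinite sum for ##Y## at ##k=60##, assuming the remaining tail is negligible at double precision):

    Code (Python):
    from math import log2

    # (a) X is uniform on {1, ..., 8}
    H_X = -sum((1/8) * log2(1/8) for _ in range(8))         # 3.0 bits

    # (b) Pr{Y = k} = 2**-k, k = 1, 2, 3, ... (truncated at k = 60)
    H_Y = -sum(2**-k * log2(2**-k) for k in range(1, 61))   # ~ 2.0 bits

    print(H_X, H_Y)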

    3. The attempt at a solution
    Let ##Z=X+Y## and ##W=X-Y##. I form the vectors ##\mathbf{u} = [Z,W]^T## and ##\mathbf{v}=[X,Y]^T## and relate them by the linear transformation

    ##\mathbf{u}=\begin{bmatrix}1&1 \\ 1&-1 \end{bmatrix}\mathbf{v}=\mathbf{M}\mathbf{v}##.

    Therefore, ##H(\mathbf{u})=H(X+Y,X-Y)=H(\mathbf{v})+\log_2\lvert\text{det}\left(\mathbf{M}\right)\rvert##. I then have

    ##\log_2\lvert\text{det}\left(\mathbf{M}\right)\rvert=\log_2 2=1## bit (since ##\text{det}\left(\mathbf{M}\right)=-2##)
    ##H(\mathbf{v})=H(X)+H(Y|X)=H(X)+H(Y)=5## bits (since ##Y## is independent of ##X##).

    This leaves me with the answer for (c) to be 6 bits.

    Edit: Unless the log-det formula I am using only holds for continuous distributions (differential entropy) and not for discrete ones.
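    For reference, here is a brute-force check of (c) that enumerates the joint pmf of ##(X+Y, X-Y)## directly (same truncation of ##Y## at ##k=60## as above):

    Code (Python):
    from math import log2
    from collections import defaultdict

    # accumulate the joint pmf of (X + Y, X - Y)
    joint = defaultdict(float)
    for x in range(1, 9):            # X uniform on {1, ..., 8}
        for y in range(1, 61):       # Pr{Y = k} = 2**-k, truncated
            joint[(x + y, x - y)] += (1/8) * 2**-y

    H = -sum(p * log2(p) for p in joint.values())
    print(H)                         # ~ 5.0 bits

    This comes out at essentially 5 bits, so the extra bit from the log-det term should not be there.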
     
    Last edited: Jan 24, 2015
  3. Jan 25, 2015 #2

    haruspex


    I'm not familiar with this ##\log_2\lvert\text{det}\left(\mathbf{M}\right)\rvert## formula. Can you post a link?
    It feels wrong. If Y = 2X, would H(Y) be different from H(X)?
    It's sort of obvious that H(X+Y,X-Y) = H(X,Y), and since X and Y are independent, H(X,Y) = H(X)+H(Y).
     
  4. Jan 25, 2015 #3
    I think it was incorrect usage. It doesn't apply here since these are pmfs and not pdfs. I got it from the Wikipedia page on differential entropy.

    Anyway, I don't disagree that if Y is a scaled version of X, the uncertainty in the RV is the same. The probabilities are unchanged regardless of the values they take on the sample space.

    I guess I didn't see it as obvious here, since H(X+Y,X-Y) looks more complicated than it is. The way I can understand it is that, given Z (the sum) and W (the difference), we can always determine X and Y. So if X and Y are independent, the entropy is just the sum.
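    To make that concrete (just inverting the ##\mathbf{M}## from my first post): ##X=\tfrac{1}{2}(Z+W)## and ##Y=\tfrac{1}{2}(Z-W)##, so ##H(X,Y\mid Z,W)=0## and ##H(Z,W\mid X,Y)=0##, hence ##H(Z,W)=H(X,Y)=H(X)+H(Y)=5## bits.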

    Do you know whether, in general, an affine relationship between RVs leaves the entropy unchanged? It makes sense conceptually, but I couldn't find a theorem for it in my book (Cover & Thomas).
     
  5. Jan 25, 2015 #4

    haruspex


    For discrete RVs, I would say they'd be the same given any bidirectional deterministic relationship: if Y = f(X) with f a bijection, then P(Y=f(x)) = P(X=x), so the pmf of Y is just a relabeling of the pmf of X and H(Y) = H(X).
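    A toy numeric illustration of that relabeling (the pmf here is arbitrary, chosen only for the example):

    Code (Python):
    from math import log2

    p = {1: 0.5, 2: 0.25, 3: 0.25}           # arbitrary pmf for X
    f = lambda x: 2 * x                       # an injective map, e.g. Y = 2X
    q = {f(x): px for x, px in p.items()}     # pmf of Y = f(X): same masses, new labels

    H = lambda d: -sum(px * log2(px) for px in d.values())
    print(H(p), H(q))                         # both 1.5 bits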
     
  6. Jan 25, 2015 #5
    Okay, that is actually what I was getting at last night. Shortly after posting, I managed to convince myself that 5 bits makes sense. Thank you for the help!
     