About Markov source entropy

  1. Oct 28, 2012 #1
    Greetings,
    I want to ask whether I have understood this subject correctly.
    Let's say we have an order-1 binary Markov source.
    H(a) = -Paa*log2(Paa) - Pab*log2(Pab) bits/symbol.
    From what I understand, this is the average information of a symbol generated after an "a", i.e. of the continuations aa or ab.
    Is that right?
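    For concreteness, here is a small Python sketch of how I read that formula (the 0.3/0.7 transition probabilities and the names are just made-up for illustration, not from any real source):

    Code:
    import math

    def conditional_entropy(p_next):
        # Entropy (in bits) of the next symbol given the current state;
        # p_next maps each possible next symbol to its transition probability.
        return -sum(p * math.log2(p) for p in p_next.values() if p > 0)

    # Made-up transition probabilities out of state "a": Paa and Pab
    p_from_a = {"a": 0.3, "b": 0.7}
    H_a = conditional_entropy(p_from_a)   # = -0.3*log2(0.3) - 0.7*log2(0.7)
    print(H_a)                            # about 0.881 bits/symbol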
     
  2. Oct 29, 2012 #2

    chiro

    Science Advisor

    Hey Drao92.

    This is spot on: in the context of a Markov model, this entropy gives the information content of the symbol that follows an "a".

    If the source weren't Markovian and this were a general statement, it would be a lot more complex (though it should always be bounded by this entropy figure).

    As a footnote, recall that something with maximum entropy is purely random, and the lower the entropy, the more ordered and less random a process or random variable (or distribution) is. So if you have extra information that makes something less random, the entropy will be lower.

    This is the intuitive reason for the bound, and being aware of it is extremely useful when working with entropy identities as well as when solving practical problems (e.g. engineering problems involving a maximal noise component).
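    To see the bound numerically, here is a quick Python sketch (the 2x2 transition matrix is just an arbitrary example): conditioning on the previous symbol never increases the entropy compared to the unconditional symbol distribution.

    Code:
    import math

    def H(probs):
        # Shannon entropy in bits of a discrete distribution
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Made-up 2-state chain: rows are P(next symbol | current symbol)
    P = [[0.3, 0.7],
         [0.6, 0.4]]

    # Stationary distribution pi (solves pi = pi*P for a 2-state chain)
    pi0 = P[1][0] / (P[0][1] + P[1][0])
    pi = [pi0, 1 - pi0]

    H_unconditional = H(pi)                                  # ignore the previous symbol
    H_conditional = sum(pi[i] * H(P[i]) for i in range(2))   # condition on the previous symbol
    print(H_unconditional, H_conditional)  # ~0.996 vs ~0.930: conditioning never increases entropy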
     
  3. Nov 8, 2012 #3
    Sorry for the late post. Can you tell me if this is correct?
    The transition matrix is:
    [0.2 0.8]
    [1 0]
    If the total entropy is H(0) + H(1) = e,
    would the quantity of information generated after a zero be
    e*0.2 + e*0.8?
    This is because e is the mean information generated per symbol, and 0.2 and 0.8 are the probabilities of generating a 0 or a 1 after a "zero".
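    For reference, here is a small Python sketch of the standard formulas for this matrix: the per-state conditional entropies H(0) and H(1), and the per-symbol entropy rate obtained by weighting them with the stationary distribution (the function names are just mine for illustration):

    Code:
    import math

    def H(probs):
        # Shannon entropy in bits of one row of the transition matrix
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # Transition matrix from the post: rows are P(next symbol | current symbol)
    P = [[0.2, 0.8],
         [1.0, 0.0]]

    H0, H1 = H(P[0]), H(P[1])   # information of the symbol generated after a 0 / after a 1
    # Stationary distribution (pi = pi*P) for this 2-state chain
    pi0 = P[1][0] / (P[0][1] + P[1][0])   # = 1 / 1.8
    pi = [pi0, 1 - pi0]

    entropy_rate = pi[0] * H0 + pi[1] * H1
    print(H0, H1, entropy_rate)   # ~0.722, 0.0, ~0.401 bits/symbol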
     