About Markov source entropy

  • Thread starter Drao92
  • Start date
  • #1

Main Question or Discussion Point

Greetings,
I want to ask whether I have understood this subject correctly.
Let's say we have an order-1 binary source.
H(a) = -Paa*log(Paa) - Pab*log(Pab) bit/symbol.
From what I understand, this is the average information of a symbol generated after an "a", as in aa or ab.
Is that right?
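For concreteness, here is a minimal Python sketch of that conditional entropy; the transition probabilities Paa = 0.3 and Pab = 0.7 are illustrative values only, not taken from anywhere in particular.

Code:
import math

def entropy_after_a(p_aa, p_ab):
    # Entropy (in bits) of the symbol emitted right after an "a",
    # given the transition probabilities P(a->a) and P(a->b).
    return -sum(p * math.log2(p) for p in (p_aa, p_ab) if p > 0)

# Illustrative values only: P(a->a) = 0.3, P(a->b) = 0.7
print(entropy_after_a(0.3, 0.7))  # about 0.881 bits/symbol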
 

Answers and Replies

  • #2
chiro
Science Advisor
Hey Drao92.

This is spot on: this entropy gives the information content of the symbol that follows an "a" in the context of a Markovian model.

If the source weren't Markovian and you wanted a fully general statement, the expression would be a lot more complex (though it should always be bounded by this entropy figure).

As a footnote, recall that something of maximum entropy is purely random, and the lower the entropy, the more ordered and less random a particular process or random variable (or distribution) is. So if you have extra information that makes something less random, the entropy will be lower.

This is the intuitive reason for the bound, and being aware of it is extremely useful when looking at entropy identities, as well as when solving practical problems (e.g. engineering problems that deal with some maximal noise component).
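A quick numerical check of that bound for an arbitrary binary chain (the transition probabilities below are illustrative only): the entropy of a symbol conditioned on the previous one never exceeds the entropy of a symbol with the previous one ignored.

Code:
import math

def h(dist):
    # Shannon entropy in bits of a probability vector.
    return -sum(p * math.log2(p) for p in dist if p > 0)

# Illustrative transition matrix, rows = P(next symbol | current symbol)
P = [[0.9, 0.1],
     [0.4, 0.6]]

# Stationary distribution of a 2-state chain: pi0 = p10 / (p01 + p10)
pi0 = P[1][0] / (P[0][1] + P[1][0])
pi = [pi0, 1 - pi0]

marginal = h(pi)                                      # previous symbol ignored
conditional = sum(pi[i] * h(P[i]) for i in range(2))  # previous symbol known
print(marginal, conditional)  # about 0.722 vs 0.569: conditional <= marginal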
 
  • #3
Sorry for the late reply. Can you tell me if this is correct?
The transition matrix is:
[0.2 0.8]
[1   0]
If the total entropy is H(0) + H(1) = e,
would the quantity of information generated after a zero be
e*0.2 + e*0.8?
Because e is the mean information generated per symbol, and 0.2 and 0.8 are the probabilities of generating a 0 or a 1 after a "zero".
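For reference, a minimal Python sketch of the standard per-state entropies and the stationary-weighted entropy rate for this matrix (usual definitions only; the numbers in the comments are just what those formulas give for this matrix).

Code:
import math

def h(row):
    # Shannon entropy in bits of a probability row.
    return -sum(p * math.log2(p) for p in row if p > 0)

# Transition matrix from above, rows = P(next symbol | current symbol)
P = [[0.2, 0.8],
     [1.0, 0.0]]

H0 = h(P[0])  # entropy of the symbol following a "0": about 0.722 bits
H1 = h(P[1])  # entropy of the symbol following a "1": 0 bits (a "0" is certain)

# Stationary distribution of a 2-state chain: pi0 = p10 / (p01 + p10)
pi0 = P[1][0] / (P[0][1] + P[1][0])  # = 1/1.8
pi = [pi0, 1 - pi0]

# Entropy rate of the source: per-state entropies weighted by the stationary probabilities
rate = pi[0] * H0 + pi[1] * H1       # about 0.401 bits/symbol
print(H0, H1, rate)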
 
