How Does Markov Source Entropy Calculate Information for Binary Sources?

  • Context: Graduate
  • Thread starter: Drao92
  • Tags: Entropy, Source
SUMMARY

The discussion centers on the calculation of information for binary sources using Markov source entropy. The formula H(a) = -Paa*log(Paa) - Pab*log(Pab) accurately represents the average information of a symbol generated after an "a" in a first-order Markov model. Participants confirm that this entropy quantifies the information content following a specific state, with maximum entropy indicating randomness and lower entropy indicating order. The transition matrix provided ([0.2 0.8], [1 0]) is used to illustrate the calculation of information generated after a zero, reinforcing the relationship between probabilities and information content.

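As a quick illustration of the formula above, here is a minimal Python sketch of the conditional entropy H(a) = -Paa*log(Paa) - Pab*log(Pab); the function name and the example probabilities are only illustrative.

```python
import math

def conditional_entropy(p_aa, p_ab):
    """Average information (in bits) of the symbol that follows an 'a'
    in a first-order binary Markov source, given the transition
    probabilities P(a|a) = p_aa and P(b|a) = p_ab."""
    h = 0.0
    for p in (p_aa, p_ab):
        if p > 0:                      # convention: 0 * log(0) = 0
            h -= p * math.log2(p)
    return h

# Illustrative values for one row of a transition matrix:
print(conditional_entropy(0.5, 0.5))   # 1.0 bit/symbol (maximum for a binary choice)
print(conditional_entropy(0.9, 0.1))   # ~0.469 bit/symbol (more predictable, less information)
```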
PREREQUISITES
  • Understanding of Markov models and their properties
  • Familiarity with entropy concepts in information theory
  • Knowledge of logarithmic functions and their applications
  • Basic grasp of probability theory and transition matrices
NEXT STEPS
  • Explore advanced Markov chain models and their applications
  • Study Shannon entropy and its implications in information theory
  • Learn about the relationship between entropy and data compression techniques
  • Investigate practical applications of entropy in engineering and noise reduction
USEFUL FOR

Students and professionals in data science, information theory researchers, and engineers dealing with noise and information processing will benefit from this discussion.

Drao92
Greetings,
I want to ask you something to check whether I have understood this subject well.
Let's say we have an order-1 binary source:
H(a) = -Paa*log(Paa) - Pab*log(Pab) bit/symbol.
From what I understand, this is the average information of a symbol generated after an "a" (i.e. the pair aa or ab).
Is that right?
 
Hey Drao92.

This is spot on: in a first-order Markov model, this entropy gives the average information content of the symbol that follows an "a".

If the source weren't Markovian and this were a general statement, the expression would be a lot more complex (though it should always be bounded by this entropy figure).

As a footnote, recall that a process at maximum entropy is purely random, and the lower the entropy, the more ordered and less random a particular process, random variable, or distribution is. So if you have extra information that makes something less random, the entropy will be lower.

This is the intuitive reason for the bound, and being aware of it is extremely useful when working with entropy identities, as well as when solving practical problems (like engineering problems dealing with a maximal noise component).
 
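To make that concrete, here is a small, purely illustrative Python comparison: a fair binary source attains the maximum entropy of 1 bit/symbol, while a biased (more ordered, more predictable) one has lower entropy.

```python
import math

def binary_entropy(p):
    """Entropy in bits of a binary random variable with P(0) = p, P(1) = 1 - p."""
    if p in (0.0, 1.0):
        return 0.0                     # fully determined: no information at all
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # 1.0    -> purely random, maximum entropy
print(binary_entropy(0.8))   # ~0.722 -> more ordered, lower entropy
print(binary_entropy(1.0))   # 0.0    -> completely predictable
```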
Sorry for the late post. Can you tell me if this is correct?
The transition matrix is:
[0.2 0.8]
[1 0]
If the total entropy is H(0) + H(1) = e,
would the quantity of information generated after a zero be
e*0.2 + e*0.8?
Because e is the mean information generated per symbol, and 0.2 and 0.8 are the probabilities of generating a 0 or a 1 after a "zero".
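For reference, here is a minimal Python sketch that simply evaluates the per-state entropies H(0) and H(1) for the transition matrix above (the function and variable names are only illustrative); it only computes the two terms under discussion.

```python
import math

def state_entropy(row):
    """Average information (bits) of the symbol generated after a given state,
    where `row` holds the transition probabilities out of that state."""
    return -sum(p * math.log2(p) for p in row if p > 0)   # 0*log(0) taken as 0

P = [[0.2, 0.8],   # transitions out of state "0"
     [1.0, 0.0]]   # transitions out of state "1"

H0 = state_entropy(P[0])   # ~0.722 bits per symbol following a "0"
H1 = state_entropy(P[1])   # 0.0 bits: after a "1" the next symbol is certain
print(H0, H1, H0 + H1)
```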
 
