About Markov source entropy

In summary, the conversation discusses the calculation of average information for a binary source, specifically in the context of a first-order Markovian model. The entropy is lower when there is more order and less randomness, and the transition matrix supplies the conditional probabilities used in the calculation. A follow-up post asks whether the information generated after a zero can be written as e*0.2 + e*0.8, where e is the mean information generated per symbol and 0.2 and 0.8 are the probabilities of generating 0 or 1 after a zero.
  • #1
Drao92
Greetings,
I want to ask you something, to check if I understood this subject well.
Let's say we have an order-1 binary source.
H(a) = -Paa*log(Paa) - Pab*log(Pab) bit/symbol.
From what I understand, this is the average information of a symbol generated after an "a", like aa or ab.
Is it right?
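For a concrete feel of that formula, here is a small numerical sketch (not from the post; Paa = 0.2 and Pab = 0.8 are illustrative values only):

```python
import math

# Hypothetical transition probabilities out of state "a" (illustrative values only).
P_aa = 0.2  # probability of generating another "a" after an "a"
P_ab = 0.8  # probability of generating a "b" after an "a"

# Conditional entropy of the next symbol given that the current symbol is "a":
# H(a) = -P_aa*log2(P_aa) - P_ab*log2(P_ab), in bits per symbol.
H_a = -(P_aa * math.log2(P_aa) + P_ab * math.log2(P_ab))
print(f"H(a) = {H_a:.4f} bit/symbol")  # ~0.7219
```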
 
  • #2
Hey Drao92.

This is spot on, and this entropy does give the information content of something following an "a" in the context of a Markovian model.

If it wasn't Markovian and was a general statement, then it would be a lot more complex (however, it should always be bounded by this entropy figure).

As a footnote, recall that something of maximum entropy is purely random; the lower the entropy, the more ordered and less random a particular process or random variable (or distribution) is. So if you have extra information that makes something less random, the entropy will be lower.

This is the intuitive reason for the bound, and being aware of it can be extremely useful when looking at entropy identities as well as when solving practical problems (like engineering problems dealing with some maximal noise component).
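A quick illustration of that footnote (not from the thread; the probabilities are made-up values): a fair binary source has the maximum possible entropy of 1 bit per symbol, while a heavily biased one is more predictable and carries less information per symbol.

```python
import math

def bernoulli_entropy(p: float) -> float:
    """Entropy in bits of a binary source that emits 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(bernoulli_entropy(0.5))  # 1.0    -> purely random, maximum entropy
print(bernoulli_entropy(0.9))  # ~0.469 -> more order, lower entropy
```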
 
  • #3
Sorry for the late post. Can you tell me if this is correct?
The transition matrix is:
[0.2 0.8]
[1 0]
If the total entropy is H(0) + H(1) = e,
would the quantity of information generated after a zero be
e*0.2 + e*0.8?
Because e is the mean information generated per symbol, and 0.2 and 0.8 are the probabilities of generating 0 or 1 after a "zero".
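For reference (not an answer from the thread), the usual way to combine the per-state entropies is to weight them by the chain's stationary distribution, rather than applying the row probabilities 0.2 and 0.8 to the total e. A minimal sketch with the transition matrix above and base-2 logarithms:

```python
import math

# Transition matrix from the post: row i gives P(next symbol = j | current symbol = i).
P = [[0.2, 0.8],
     [1.0, 0.0]]

def row_entropy(row):
    """Conditional entropy of the next symbol given the current state, in bits."""
    return -sum(p * math.log2(p) for p in row if p > 0)

H0, H1 = row_entropy(P[0]), row_entropy(P[1])  # ~0.7219 and 0.0 bits

# Stationary distribution pi solves pi = pi*P with pi0 + pi1 = 1.
# Here pi1 = 0.8*pi0, so pi0 = 1/1.8 and pi1 = 0.8/1.8.
pi0 = 1 / 1.8
pi1 = 1 - pi0

# Entropy rate of the source: per-state entropies weighted by how often each state is visited.
H_rate = pi0 * H0 + pi1 * H1
print(f"H(0) = {H0:.4f}, H(1) = {H1:.4f}, entropy rate = {H_rate:.4f} bit/symbol")  # ~0.4011
```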
 

1. What is a Markov source?

A Markov source is a type of random process where the next state or event depends only on the current state or event, and not on any previous states or events. This is also known as the Markov property.

2. How is entropy related to a Markov source?

Entropy is a measure of the uncertainty or randomness in a system. In the context of a Markov source, entropy is used to measure the amount of uncertainty in predicting the next state or event based on the current state or event.

3. How is entropy calculated for a Markov source?

The entropy of a Markov source is calculated with the Shannon entropy formula applied to the transition probabilities: for each state, compute the entropy of the distribution over possible next states, then average these conditional entropies, weighting each state by how often the source occupies it (its stationary distribution). This adapts the ordinary Shannon formula to the Markov property, since only the current state and the next state matter.
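In symbols, with stationary probabilities pi_i and transition probabilities P_ij, a standard statement of this entropy rate (added here for reference) is:

```latex
H = -\sum_{i} \pi_i \sum_{j} P_{ij} \log_2 P_{ij} \quad \text{bits per symbol}
```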

4. What is the significance of entropy in a Markov source?

Entropy is a useful metric for understanding the behavior of a Markov source. A higher entropy value indicates a more unpredictable or random source, while a lower entropy value indicates a more predictable source. This can be useful in various applications, such as data compression and information theory.

5. Can Markov sources be used in real-world applications?

Yes, Markov sources have a wide range of applications in various fields such as natural language processing, speech recognition, and financial modeling. They can be used to model and analyze complex systems and make predictions based on past behavior.
