Need sum help (information theory)

In summary, the thread discusses an information theory problem involving a conveyor belt of N widgets, an unknown number of which are defective. The goal is to find the mutual information between the random variables representing the number of defective widgets in a sample and in the whole lot. Several formulas and approximations are explored, with the final result being a simple nonnegative expression that depends only on the sample size and the lot size.
  • #1
Adeimantus
need sum help please:) (information theory)

Hello.

I'm working on an information theory-related problem that involves doing a nasty sum. The problem is this: in a widget factory there is a conveyor belt with N widgets on it, and an unknown fraction [itex]\xi = a/N[/itex] of them are defective. You examine a sample of n widgets and find that a fraction [itex] \eta = b/n [/itex] is defective. What is the mutual information [itex]I(\eta : \xi)[/itex] between the random variables [itex]\eta[/itex] and [itex]\xi[/itex]? The idea, I think, is to see how large a sample n you need to take so that the sample defect rate gives you information about the actual defect rate. Let A_a be the event that there are a defective parts in the whole lot and B_b be the event that there are b defective parts in the sample. Then the formula for mutual information is:

[tex] I (\eta : \xi) = \sum_{a=1}^{N} \sum_{b=1}^{n} P(A_{a}) P(B_{b} | A_{a}) \log_{2} { \frac{P(B_{b} | A_{a}) }{P(B_{b})} } [/tex]

which is always nonnegative. Here's what I've got so far: P(A_a) = 1/N by the principle of insufficient reason (a could be anything from 1 to N with equal probability), and

[tex]
P(B_{b} | A_{a}) = \frac{ \left( ^{a}_{b} \right) \left( ^{N-a}_{n-b} \right) }{ \left(^{N}_{n} \right) }
= \frac{ \left( ^{n}_{b} \right) \left( ^{N-n}_{a-b} \right) }{ \left(^{N}_{a} \right) }
[/tex]
[tex]
P(B_{b}) = \sum_{a=1}^{N} P(A_{a}) P(B_{b} | A_{a}) = \sum_{a=1}^{N} \frac{1}{N} \frac { \left( ^{n}_{b} \right) \left( ^{N-n}_{a-b} \right) }{ \left(^{N}_{a} \right) }
\approx
\int_{0}^{1} \left( ^{n}_{b} \right) x^{b} (1 - x)^{n-b} dx = \frac {\left( ^{n}_{b} \right)}{ \left( ^{n}_{b} \right) (n+1)} = \frac {1}{n+1}
[/tex]

if you pretend it is a Riemann sum and assume that [itex]N \gg n[/itex] and [itex]a \gg b[/itex], which I'm not sure is OK to do. I'm guessing the idea is to get some asymptotic formula for the mutual information as N becomes large, but how do you retain the dependence on N in the sum? For instance, if I apply the "large N" approximation for [itex]P(B_{b} | A_{a}) [/itex], which is [itex]\left( ^{n}_{b} \right) \left(\frac{a}{N}\right)^{b} \left(1 - \frac{a}{N} \right)^{n-b}[/itex], and do the Riemann sum I get an expression that has no dependence on N and apparently diverges to negative infinity (weird because mutual information is nonnegative).
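For what it's worth, the [itex]P(B_{b}) \approx 1/(n+1)[/itex] step is easy to sanity-check numerically. Here is a minimal Python sketch that computes the marginal directly from the hypergeometric expression and the uniform prior stated above; the values of N and n and the helper name are arbitrary choices for illustration.

[code]
# Minimal check that P(B_b) is close to 1/(n+1) under the uniform prior
# and the hypergeometric P(B_b | A_a) given above.
# N and n are arbitrary illustration values.
from math import comb

N, n = 1000, 20

def p_b_given_a(b, a):
    # Hypergeometric: b defects in a sample of n, given a defects among N.
    return comb(a, b) * comb(N - a, n - b) / comb(N, n)

for b in range(n + 1):
    # Uniform prior over a = 1, ..., N with weight 1/N, as in the post.
    p_b = sum(p_b_given_a(b, a) for a in range(1, N + 1)) / N
    print(b, round(p_b, 6), round(1 / (n + 1), 6))
[/code]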

This is not a homework problem, just a "something to think about" problem I came across in an informal book on information theory.

thanks
 
  • #2
sorry for the double post...it was an accident. maybe the admin/moderator can delete the other one...
 
  • #3
Adeimantus said:
Here's what I've got so far: P(A_a) = 1/N by principle of insufficient reason (a could be anything from 1 to N with equal probability),

Don't forget the case [itex]a=0[/itex] ;]

Adeimantus said:
[tex]
P(B_{b}) = \sum_{a=1}^{N} P(A_{a}) P(B_{b} | A_{a}) = \sum_{a=1}^{N} \frac{1}{N} \frac { \left( ^{n}_{b} \right) \left( ^{N-n}_{a-b} \right) }{ \left(^{N}_{a} \right) }
\approx
\int_{0}^{1} \left( ^{n}_{b} \right) x^{b} (1 - x)^{n-b} dx = \frac {\left( ^{n}_{b} \right)}{ \left( ^{n}_{b} \right) (n+1)} = \frac {1}{n+1}
[/tex]

if you pretend it is a Riemann sum and assume that [itex]N \gg n[/itex] and [itex]a \gg b[/itex], which I'm not sure is OK to do.

Don't you also have to assume that [itex]N\to\infty[/itex] for the Riemann sum? But the final answer seems reasonable regardless: uniform over the [itex]n+1[/itex] possible sample defect counts, just like the principle of insufficient reason gave us for [itex]P(A_a)[/itex].

Plugging those three expressions for [itex]P(A_a)[/itex], [itex]P(B_b)[/itex] and [itex]P(B_b|A_a)[/itex] into the formula for mutual information gives a manageable result:

[tex]
I(A,B) = \log_2(n+1) + \frac{1}{n+1}\sum_{b=0}^n \log_2\left( ^{n}_{b} \right) + \frac{1}{N+1}\sum_{b=0}^n \sum_{a=0}^N \frac{\left( ^{n}_{b} \right)\left( ^{N-n}_{a-b} \right)}{\left( ^{N}_{a} \right)}\log_2\left( ^{N-n}_{a-b} \right) - \frac{1}{N+1}\sum_{a=0}^N \log_2\left( ^{N}_{a} \right)
[/tex]

I'm sure you can simplify that further, and maybe use a clever approximation or two, to get something that provides more intuition, though.
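Another way to get a feel for it (and to confirm the result stays between 0 and [itex]\log_2(n+1)[/itex]) is to evaluate the mutual information directly from the definition as a double sum. Here is a minimal Python sketch, assuming the uniform prior on [itex]a = 0,\dots,N[/itex] and the hypergeometric likelihood; the (N, n) pairs and the function name are just for illustration.

[code]
# Exact numerical evaluation of I(A;B) from the definition, using the
# uniform prior on a = 0..N and the hypergeometric P(B_b | A_a).
# The (N, n) pairs below are illustrative only.
from math import comb, log2

def mutual_information(N, n):
    p_a = 1.0 / (N + 1)
    # like[b][a] = P(B_b | A_a)
    like = [[comb(a, b) * comb(N - a, n - b) / comb(N, n)
             for a in range(N + 1)] for b in range(n + 1)]
    p_b = [sum(like[b][a] * p_a for a in range(N + 1)) for b in range(n + 1)]
    I = 0.0
    for b in range(n + 1):
        for a in range(N + 1):
            joint = p_a * like[b][a]
            if joint > 0:
                I += joint * log2(like[b][a] / p_b[b])
    return I

for N, n in [(100, 5), (100, 20), (1000, 20)]:
    print(N, n, round(mutual_information(N, n), 4), round(log2(n + 1), 4))
[/code]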
 
  • #4
quadraphonics said:
Don't forget the case [itex]a=0[/itex] ;]

Ah yes! I forgot both a=0 and b=0. Shame on me! I was also a little too quick to substitute the large N approximation for P(B_b|A_a). I get the same expression you do after simplification. However, since I am assuming N >> n and a >> b, I decided to use the other exact expression for P(B_b|A_a), namely:

[tex]
\frac{ \left(^{a}_{b}\right)\left(^{N-a}_{n-b}\right) }{\left(^{N}_{n}\right)}
[/tex]

to get a similar expression for I(A,B):

[tex]
I(A,B) = \log_2(n+1) -\log_2{\left(^{N}_{n}\right)} + \frac{1}{N+1}\sum_{b=0}^n \sum_{a=0}^N \frac{\left( ^{a}_{b} \right)\left( ^{N-a}_{n-b} \right)}{\left( ^{N}_{n} \right)}\log_2{\left(^{a}_{b}\right)\left( ^{N-a}_{n-b} \right) }
[/tex]

I did this so I could apply the Stirling approximation to the binomial coefficients where the top is much larger than the bottom:

[tex]
\left(^{P}_{Q}\right) \approx \frac{(P/e)^{Q}}{Q!} \quad \text{when } P \gg Q
[/tex]

I applied this approximation in the argument of the log function in the double sum, and after much simplification I got

[tex]
I(A,B) = \log_2(n+1) + \frac{1}{n+1}\sum_{b=0}^{n} \log_2\left(^{n}_{b}\right) + \frac{2n}{N+1}\sum_{a=0}^{N} \frac{a}{N}\log_2\frac{a}{N}
[/tex]

At least this is nonnegative. I'm still trying to figure out what the significance of it is, other than that for very large N the mutual information apparently depends on n only.
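If it helps, here is a short Python sketch that just evaluates that approximate expression for a few (N, n), so it can be compared against the exact double sum from earlier; the numbers and the helper name are purely illustrative.

[code]
# Evaluating the approximate expression above for a few illustrative (N, n).
from math import comb, log2

def approx_I(N, n):
    term1 = log2(n + 1)
    term2 = sum(log2(comb(n, b)) for b in range(n + 1)) / (n + 1)
    # The a = 0 term is taken as 0 (the usual 0*log 0 = 0 convention).
    term3 = (2.0 * n / (N + 1)) * sum((a / N) * log2(a / N)
                                      for a in range(1, N + 1))
    return term1 + term2 + term3

for N, n in [(1000, 5), (1000, 20), (10000, 20)]:
    print(N, n, round(approx_I(N, n), 4))
[/code]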

thanks for the help!
 
  • #5
Sounds good. Another approximation that might be useful here is:

[tex]
\log\left( ^n _k \right) \leq k\left(1 + \log(\frac{n}{k})\right)[/tex]

which, IIRC, is tight for [itex]k \ll n[/itex]
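That bound follows from [itex]\left(^{n}_{k}\right) \leq (en/k)^{k}[/itex], and note it is stated in natural log, so divide by [itex]\ln 2[/itex] to get bits. Here is a quick Python check of how tight it is; n and the values of k are chosen arbitrarily for illustration.

[code]
# Quick check of ln C(n,k) <= k * (1 + ln(n/k)); n and k are arbitrary.
from math import comb, log

n = 1000
for k in (1, 5, 20, 100, 500):
    exact = log(comb(n, k))
    bound = k * (1 + log(n / k))
    print(k, round(exact, 2), round(bound, 2))
[/code]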
 
  • #6
That's a good suggestion. With that approximation I get

[tex]
I(A,B) \leq \log_2(n+1) - \log_2{ \left(^N_n\right) } + { n } + \frac {2n}{N(N+1)} \sum_{a=0}^{N} {a\log_2a} - \frac{2}{n+1}\sum_{b=0}^{n} {b\log_2b}
[/tex]

Not only should I(A,B) be nonnegative, but it should also be less than or equal to [itex]\log_2(n+1)[/itex], the uncertainty [itex]H(B)[/itex]. Just eyeballing it, the above looks like it meets that requirement, but I haven't proved it yet. Still, I think this approximation is easier to work with than the first one I used. I'll think about it some more.
 
  • #7
There's probably a good bound for those [itex]x \log x[/itex] summations, but I can't recall one offhand.

However, I think there's a different approach to this problem that might be closer to what the author had in mind. Instead of using the principle of insufficient reason to assume a uniform distribution on [itex]A[/itex] (and so on [itex]B[/itex] as well), let's make a different assumption based on the idea that this is an assembly line in a factory. Specifically, let's model the widget assembly process as an i.i.d. sequence of Bernoulli trials, with some failure probability [itex]\gamma[/itex]. Then, [itex]A[/itex] has a binomial distribution:

[tex]
P(A_a) = \bold{B}(a;N,\gamma) = \left(^N _a\right) \gamma^a(1-\gamma)^{N-a}
[/tex]

where I'm using a bold [itex]\bold{B}[/itex] for the Binomial distribution, versus a regular [itex]B[/itex] for the random variable. Using the same expression for [itex]P(B_b|A_a)[/itex] as before, it turns out that [itex]B[/itex] is distributed as [itex]\bold{B}(b;n,\gamma)[/itex] and, more interestingly, [itex]P(A_a|B_b) = \bold{B}(a-b;N-n,\gamma)[/itex].
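Both of those claims are easy to verify numerically. Here is a small Python sketch that checks the marginal of [itex]B[/itex] and the conditional of [itex]A[/itex] given [itex]B_b[/itex] against the stated binomials; the values of N, n, [itex]\gamma[/itex], and b, and the helper names, are arbitrary illustration choices.

[code]
# Numerical check that, with A ~ Binomial(N, g) and hypergeometric sampling,
# B ~ Binomial(n, g) and P(A_a | B_b) = Binomial(a - b; N - n, g).
# N, n, g, b are arbitrary illustration values.
from math import comb

N, n, g = 50, 10, 0.15

def binom_pmf(k, m, p):
    if k < 0 or k > m:
        return 0.0
    return comb(m, k) * p**k * (1 - p)**(m - k)

def p_b_given_a(b, a):
    # Hypergeometric: b defects in the sample of n, given a defects among N.
    return comb(a, b) * comb(N - a, n - b) / comb(N, n)

# Marginal of B
for b in range(n + 1):
    p_b = sum(binom_pmf(a, N, g) * p_b_given_a(b, a) for a in range(N + 1))
    assert abs(p_b - binom_pmf(b, n, g)) < 1e-9

# Posterior of A given B = b
b = 3
p_b = binom_pmf(b, n, g)
for a in range(N + 1):
    post = binom_pmf(a, N, g) * p_b_given_a(b, a) / p_b
    assert abs(post - binom_pmf(a - b, N - n, g)) < 1e-9

print("both identities check out numerically")
[/code]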

Recall this expression for mutual information:

[tex]
\mathcal{I}\left(A;B\right) = \mathcal{H}(A) - \mathcal{H}(A|B)[/tex]

where

[tex]
\mathcal{H}(A|B) = -\sum_b P(B_b) \sum_a P(A_a|B_b) \log_2 P(A_a|B_b)[/tex].

That is, we're expressing the mutual information as the average number of bits required to describe the number of bad widgets in the entire lot before doing the sample, minus the number required after the sample has been completed. Since [itex] b [/itex] only appears in [itex]P(A_a|B_b)[/itex] as an offset, the inner summation in the expression for [itex]\mathcal{H}(A|B)[/itex] does not depend on [itex]b[/itex] (which is a consequence of the i.i.d. model of the widget assembly process), and so we get:

[tex]
\mathcal{I}\left(A;B\right) = \mathcal{H}\left(\bold{B}(N,\gamma)\right) - \mathcal{H}\left(\bold{B}(N-n,\gamma)\right)[/tex]

where [itex]\mathcal{H}\left( \bold{B}(N,\gamma) \right)[/itex] denotes the entropy of a Binomial-distributed random variable with parameters [itex]N[/itex] and [itex]\gamma[/itex]. At this point, without even worrying about obtaining a particular expression for the binomial entropy, we can get some intuition about the mutual information. That is, the first term is the number of bits required to describe the failures in a lot of [itex]N[/itex] widgets, while the second is the number required to describe the failures in [itex]N-n[/itex] widgets, where both lots are Binomial with the same failure rate. So, in a sense, the mutual information corresponds to the reduction in the size of the untested lot of widgets.
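Even without a closed form, both entropies are easy to compute numerically from the binomial pmf. A minimal Python sketch, with illustrative values of N, n, and [itex]\gamma[/itex]:

[code]
# Computing H(B(N, g)) - H(B(N - n, g)) numerically, in bits.
# N, n, g are illustrative values only.
from math import comb, log2

def binom_entropy_bits(m, p):
    # Shannon entropy (bits) of a Binomial(m, p) random variable.
    H = 0.0
    for k in range(m + 1):
        pk = comb(m, k) * p**k * (1 - p)**(m - k)
        if pk > 0:
            H -= pk * log2(pk)
    return H

N, n, g = 500, 100, 0.1
print(binom_entropy_bits(N, g) - binom_entropy_bits(N - n, g))
[/code]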

While I'm not aware of an exact expression for the entropy of the binomial distribution, there is a very popular approximation for large [itex]N[/itex] and fixed [itex]\gamma[/itex]:

[tex]
\bold{B}(a;N,\gamma) \approx \bold{N}(a;N\gamma,N\gamma(1-\gamma))
[/tex]

where [itex]\bold{N}(a;\mu,\sigma^2)[/itex] is the Normal distribution with mean [itex]\mu[/itex] and variance [itex]\sigma^2[/itex]. The differential entropy, in nats, of the Normal distribution is given by [itex]\frac{1}{2}\ln\left( 2\pi e \sigma^2 \right)[/itex]. Plugging this approximation into the previous expression gives a very tidy result:

[tex]
\mathcal{I}(A;B) \approx \frac{1}{2}\ln\frac{N}{N-n}
[/tex]

Note that this expression does not depend on [itex]\gamma[/itex], so we avoid the issue of having introduced a spurious parameter not mentioned in the original problem. Also, that answer is expressed in nats, not bits.
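To get a feel for how good this approximation is, and for how weakly the exact answer depends on [itex]\gamma[/itex], here is a quick Python comparison of the exact binomial entropy difference against [itex]\frac{1}{2}\ln\frac{N}{N-n}[/itex], both in nats; the values of N, n, and [itex]\gamma[/itex] are illustrative.

[code]
# Comparing the exact binomial entropy difference with (1/2) ln(N/(N-n)),
# both in nats, for a few failure rates g.  N, n, g are illustrative.
from math import comb, log

def binom_entropy_nats(m, p):
    H = 0.0
    for k in range(m + 1):
        pk = comb(m, k) * p**k * (1 - p)**(m - k)
        if pk > 0:
            H -= pk * log(pk)
    return H

N, n = 500, 100
for g in (0.05, 0.1, 0.3, 0.5):
    exact = binom_entropy_nats(N, g) - binom_entropy_nats(N - n, g)
    approx = 0.5 * log(N / (N - n))
    print(g, round(exact, 4), round(approx, 4))
[/code]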
 
  • #8
quadraphonics said:
There's probably a good bound for those xlogx summations, but I can't recall one offhand.

However, I think there's a different approach to this problem that might be closer to what the author had in mind. Instead of using the principle of insufficient reason to assume a uniform distribution on [itex]A[/itex] (and so on [itex]B[/itex] as well), let's make some different assumption based on the idea that this is an assembly line in a factory. Specifically, let's model the widget assembly process as an i.i.d. sequence of Bernoulli trials, with some failure probability [itex]\gamma[/itex]. Then, [itex]A[/itex] has a binomial distribution:

That is a much more realistic model than the uniform distribution. It fits in with the usual communications idea of a symbol source and channel that has a fixed error rate.

quadraphonics said:

Using the same expression for [itex]P(B_b|A_a)[/itex] as before, it turns out that [itex]B[/itex] is distributed as [itex]\bold{B}(b;n,\gamma)[/itex] and, more interestingly, [itex]P(A_a|B_b) = \bold{B}(a-b;N-n,\gamma)[/itex].

That last part was a total surprise to me. Did not expect everything to cancel out so neatly. Cool.


quadraphonics said:

Recall this expression for mutual information:

[tex]
\mathcal{I}\left(A;B\right) = \mathcal{H}(A) - \mathcal{H}(A|B)[/tex]

where

[tex]
\mathcal{H}(A|B) = -\sum_b P(B_b) \sum_a P(A_a|B_b) \log_2 P(A_a|B_b)[/tex].

That is, we're expressing the mutual information as the average number of bits required to describe the number of bad widgets in the entire lot before doing the sample, minus the number required after the sample has been completed. Since [itex] b [/itex] only appears in [itex]P(A_a|B_b)[/itex] as an offset, the inner summation in the expression for [itex]\mathcal{H}(A|B)[/itex] does not depend on [itex]b[/itex] (which is a consequence of the i.i.d. model of the widget assembly process)...

That one also came as a surprise. I see how it comes out that way, in that all occurrences of 'b' are of the form 'a-b' and the sum goes from a-b = 0 to a-b = N-n, so that you can just replace a-b with a different index. But it still seems surprising that

[itex]H(A|B=b)[/itex] does not depend on [itex]b[/itex].

quadraphonics said:

...and so we get:

[tex]
\mathcal{I}\left(A;B\right) = \mathcal{H}\left(\bold{B}(N,\gamma)\right) - \mathcal{H}\left(\bold{B}(N-n,\gamma)\right)[/tex]

That is a very nice intuitive result! I also like the fact that so far no assumptions have been made about the relative sizes of N, n, a, and b.

quadraphonics said:

The differential entropy, in nats, of the Normal distribution is given by [itex]\frac{1}{2}\ln\left( 2\pi e \sigma^2 \right)[/itex]. Plugging this approximation into the previous expression gives a very tidy result:

[tex]
\mathcal{I}(A;B) \approx \frac{1}{2}\ln\frac{N}{N-n}
[/tex]

Yes, this must be what the author had in mind. I haven't learned about differential entropy yet, but I'm assuming it extends discrete entropy to the continuous case by replacing the sum with an integral. To convert to bits in this expression, you would just replace the 'ln' with 'log_2', right? It makes sense in the N >> n case: you would only gain a tiny fraction of a bit, proportional to n/N, by taking the sample. On the other hand, if you sampled half of the N widgets, you would gain a half bit of information. Somehow that doesn't seem like a whole lot, considering the uncertainty of A is of order log(N). Does this mean widget makers should quality test almost all their widgets?
 
  • #9
Adeimantus said:
That is a much more realistic model than the uniform distribution. It fits in with the usual communications idea of a symbol source and channel that has a fixed error rate.

Yeah, in engineering-related fields like information theory, the phrase "a factory has an assembly line producing widgets" is basically a codeword for "Bernoulli process."

Adeimantus said:
That last part was a total surprise to me. Did not expect everything to cancel out so neatly. Cool.

Yeah, looking at the expression for [itex]P(B_b|A_a)[/itex] was part of what motivated me to use binomial marginal distributions.

Adeimantus said:
That one also came as a surprise. I see how it comes out that way, in that all occurences of 'b' are of the form 'a-b' and the sum goes from a-b = 0 to a-b = N-n, so that you can just replace a-b with a different index. But it still seems surprising that

H(A| B=b) does not depend on b.

Right, intuitively one expects the former to depend on the latter. Certainly, for more complex models of the widget assembly process, that will be the case (in fact, it is the case for the previous assumptions, with uniform marginals, right?). So this set of assumptions is a special case: since we've assumed that each widget is assembled independently of the others, testing some subset of them doesn't let you infer much about the untested ones, only reduce their number. That everything works out so nicely for the binomial case seems, to me, to be a strong indicator that this is the "right" answer.

Adeimantus said:
Yes, this must be what the author had in mind. I haven't learned about differential entropy yet, but I'm assuming it extends discrete entropy to the continuous by replacing the sum with an integral. To convert to bits in this expression, you would just replace the 'ln' with 'log_2', right?

Yeah, that's the idea. The interpretation of differential entropy is a little different, in that it would always take an infinite number of bits to describe the value of a continuous random variable exactly. So, the differential entropy tells you the number of bits required to specify the r.v. to unit resolution.

Adeimantus said:
It makes sense in the N >> n case: you would only gain a tiny fraction of a bit, proportional to n/N, by taking the sample.

And specifically, for n=0, you get exactly 0 bits of information.

Adeimantus said:
On the other hand, if you sampled half of the N widgets, you would gain a half bit of information. Somehow that doesn't seem like a whole lot, considering the uncertainty of A is of order log(N). Does this mean widget makers should quality test almost all their widgets?

One place that the differential entropy approximation shows its faults is for n=N, in which case it blows up. This is not such a big deal for an actual differential entropy, as it's possible to have infinite mutual information between two continuous r.v.'s, each of which has finite differential entropy (more generally, the differential mutual information is not required to be less than the differential entropy of each variable).

As for interpretations, to properly design a widget test process, you'd want to use a more conventional statistical approach, where you specify some tolerance on the number of total failures, and then work out how many you need to test to fulfill that tolerance with, say, 95% confidence. It's not immediately clear what use X bits of mutual information between the test and total sets would be, unless you intend to build a Huffman code for the number of failures or something like that. I think the idea is more to have you work through an example that's common to other areas of prob/stat, and so check some of your intuitions about mutual information.
 
  • #10
Awesome. You have been a huge help. Thanks again!
 

