
Limits involving logs, negative inf and probabilities.

  1. Jan 29, 2012 #1
    Hello!
    I am playing around with an equation (i.e. it's not a textbook question), and I arrived at the following problem:

    The equation is:
    [tex]A = -\sum_{i=1}^{N} P_{i}\log_{N}(P_{i})[/tex]
    Here [itex]0 \leq P_{i} \leq 1[/itex] is the probability of finding an object in a particular one of the N states.
    I was looking at the limiting values for A. If the probabilities are all equal, A = 1. If the probability of one state approaches 1 and the other states therefore approach 0, then I get:

    [tex]\lim_{P_{X} \rightarrow 1} P_{X}\log_{N}(P_{X})[/tex]
    For the state which has a probability approaching one.

    [tex]\lim_{P_{Y} \rightarrow 0} P_{Y}\log_{N}(P_{Y})[/tex]
    For the other states, Y, Z, etc, whose probabilities are approaching 0.

    I realise the probabilities themselves do not change; what I mean by "approach" is that I want to look at the uppermost and lowermost values of A. I cannot plug in P = 0, because 1) log(0) is undefined and 2) if the probability were actually 0 then there wouldn't be N possible states. However, I think the extreme values of A are obtained when the probabilities are all equal (each equal to 1/N), in which case A = 1, and when the probability of one particular state is much greater than that of all the others.
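    To spell out the equal-probability case: if every [itex]P_{i} = 1/N[/itex], then
    [tex]A = -\sum_{i=1}^{N}\frac{1}{N}\log_{N}\left(\frac{1}{N}\right) = -N\cdot\frac{1}{N}\cdot(-1) = 1[/tex]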
    Having said this: for the first limit, the log is 0 and P = 1, so that term in the sum is 0. I get stuck on what are essentially all the other terms in the sum, because as P [itex]\rightarrow[/itex] 0 the log(P) tends to -[itex]\infty[/itex] while P tends to 0. I do not know what to do here.
    I apologise if the notation is unconventional, I hope it's correct.

    Thanks in advance,
    Nobahar.
     
    Last edited: Jan 29, 2012
  3. Jan 30, 2012 #2
    I was expecting it to yield zero, and I think this is the case. I rewrote the product as a quotient, log(P)/(1/P), converted to natural logarithms, and applied L'Hôpital's rule; the function left to evaluate is -P/ln(N), which goes to zero (steps below). If anyone wants to check whether this is correct, it would be much appreciated.
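    Writing out the steps for one of the vanishing terms:
    [tex]\lim_{P \rightarrow 0^{+}} P\log_{N}(P) = \lim_{P \rightarrow 0^{+}} \frac{\ln(P)/\ln(N)}{1/P} = \lim_{P \rightarrow 0^{+}} \frac{1/(P\ln(N))}{-1/P^{2}} = \lim_{P \rightarrow 0^{+}} \frac{-P}{\ln(N)} = 0[/tex]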
    Many thanks.
     
  4. Jan 30, 2012 #3

    Päällikkö

    Homework Helper

    This is a standard problem in entropy computations. You get the minimum entropy when you know which state the system is in, i.e. one P is 1 and the rest are 0. You get the maximum when you have no idea, i.e. every state has the same probability. As an engineer, I'd say that if there are states whose probability is 0, then there are no such states (although a mathematician would object: "almost never"). You have managed to calculate and deduce all of this, well done. As for your calculations: indeed, if you get 0 for a quantity that is clearly non-negative, you have found a minimum.

    A more natural way to arrive at your solution, and to prove that it is an extremum, is to use Lagrange multipliers and find the extrema of:
    [tex]\sum_{n=1}^{N} P_{n}\log_{N}(P_{n}) + \lambda\sum_{n=1}^{N} P_{n}[/tex]
    (take the partial derivatives with respect to each [itex]P_{n}[/itex], set each of them to zero, and use the fact that the probabilities sum to 1).
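    Sketching the first step (worth verifying for yourself): differentiating with respect to a particular [itex]P_{m}[/itex] gives
    [tex]\frac{\partial}{\partial P_{m}}\left(\sum_{n=1}^{N} P_{n}\log_{N}(P_{n}) + \lambda\sum_{n=1}^{N} P_{n}\right) = \log_{N}(P_{m}) + \frac{1}{\ln(N)} + \lambda = 0[/tex]
    which is the same equation for every m, so all the [itex]P_{m}[/itex] must be equal; the constraint that they sum to 1 then gives [itex]P_{m} = 1/N[/itex], the maximum-entropy case.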
     
  5. Jan 31, 2012 #4
    Hi Paallikko, thanks for the response.

    Yes, the equation is indeed from entropy that I was reading about!

    The subject of information is then raised, and it is described as the difference between the entropy before and the entropy after some knowledge is acquired about the system. This is because the probabilities of certain states change: some are increased, some reduced or eliminated. The text doesn't say some are increased, but this is possible, right? Is it not possible that in some 'set-up' there are states with a fixed probability of occurrence, so that it doesn't have to be the case that one state is definite; instead, there can be some intrinsic probability as to the state of the system?

    I ask this because it makes me wonder what the consequences are for information. Suppose the information about a system is defined as the difference in entropies, and suppose there is some intrinsic probability, so that even knowing everything that can be known about the system, a finite probability is still attached to more than one state. Then surely you have ALL the information about the system. Yet wouldn't this require the difference in entropies to be the maximum entropy minus zero entropy? If a finite probability is attached to more than one state, then there is a kind of 'residual entropy' that can't be eliminated. Does this mean that all the information about the system can never be known? It seems to me that it can all be known; it's just that there is a probability attached to certain states.

    I hope this makes sense, I can link to some sources or expand if it isn't clear.
     