# Upper bound for K-L divergence on discrete prob. space

## Main Question or Discussion Point

Does anyone know of any analytical expression for the upper bound on the Kullback–Leibler divergence for a discrete random variable?

What I am looking for is the bound expressed as

$$0 \le S_{KL} \le f(k)$$

Where k is the number of distinguishable outcomes.

Ultimately I am also looking for the bound in the case where the probability itself belongs to a special discrete set, rather than to R. (This would correspond to the combinatorics of microstates instead of continuum probability. Edit: this means another variable enters the bound, namely the sample/memory size; but the first question is what the bound is in the limit sample size -> infinity.)

I think I looked for this a long time ago. If anyone happens to know a pointer to where this is worked out, I would be interested.

The context of the question is to find a deeper understanding of the various entropy bounds in physics.

To start with, I am not sure to what extent there are analytical expressions, or whether I need to run some computer simulations.

Edit: I'm not looking for approximate (large-k) bounds; I am looking for the actual dependence of the upper bound on k.

/Fredrik


Err, to talk about KL divergences, you need two random variables. Also, there is no general upper bound: it can easily diverge.
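To see why it can diverge, here is a minimal sketch in plain Python (nothing thread-specific assumed): the KL divergence between two discrete distributions becomes infinite as soon as the second distribution assigns zero mass to an outcome the first one allows.

```python
import math

def kl_divergence(p, q):
    """D(P || Q) = sum_i p_i * log(p_i / q_i); infinite if q_i = 0 while p_i > 0."""
    total = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue  # 0 * log(0/q) = 0 by convention
        if qi == 0.0:
            return math.inf  # P puts mass where Q does not: divergence is infinite
        total += pi * math.log(pi / qi)
    return total

print(kl_divergence([0.5, 0.5], [0.5, 0.5]))      # 0.0: identical distributions
print(kl_divergence([0.5, 0.5], [1.0, 0.0]))      # inf: Q excludes an outcome P allows
print(kl_divergence([0.99, 0.01], [0.01, 0.99]))  # large but finite
```

Even with both distributions supported on only k = 2 outcomes, the divergence is unbounded, so no f(k) can work without further constraints.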

I was sloppy. Yes, there are two random variables; I was imagining maximizing over both. But you're right: I just realized that due to the continuum of the second variable it is indeed unbounded.

But that's not the end of the story. The problem is that I didn't formulate the question correctly, and the correct formulation isn't a purely mathematical problem. It seems the right question for me is to find a probabilistic bound: what is the probability that the divergence stays below a given value? Which is actually even better, but then I suspect there are no simple formulas. I guess a computer simulation is the way to go.

And also, in the real problem the probability takes values in a discrete and finite set between 0 and 1, rather than a continuum. The real numbers were just a simplification anyway, and I just realised that this leads to problems.

So there should be a probabilistic bound, in the sense that at a given confidence level there is a bound on the expected divergence.
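A probabilistic bound of this kind can be estimated by simulation. The sketch below uses illustrative choices (the distribution rho, the sample size M, and the 95% confidence level are all made up): it draws samples of size M from a fixed "true" distribution, computes the KL divergence of the empirical frequencies from it, and reads off the 95th percentile as the bound that holds with confidence 0.95.

```python
import math
import random
from collections import Counter

def empirical_kl(rho, M, rng):
    """Draw a sample of size M from rho; return D(empirical || rho)."""
    counts = Counter(rng.choices(range(len(rho)), weights=rho, k=M))
    d = 0.0
    for i, r in enumerate(rho):
        f = counts.get(i, 0) / M
        if f > 0:           # zero-frequency terms contribute 0 by convention
            d += f * math.log(f / r)
    return d

rng = random.Random(0)
rho = [0.5, 0.3, 0.2]       # hypothetical "true" prior
M = 100                     # sample (memory) size
divs = sorted(empirical_kl(rho, M, rng) for _ in range(2000))
bound = divs[int(0.95 * len(divs))]  # 95% of samples fall below this divergence
print(bound)
```

Because the empirical distribution always has full support inside rho's support, each sampled divergence is finite, and the quantile gives exactly the "bound at a given confidence level" described above.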

Thanks for your response to my silly question! It helped me see that I should go back to my original context and think. Maybe the expressions I already have for the probability of a frequency are the right way to go; there I effectively already have probabilistic bounds on the entropy. The divergent outcomes are unlikely in the first place.

/Fredrik

So there should be a probabilistic bound, in the sense that at a given confidence level there is a bound on the expected divergence.
This still requires some further formulation. To speak of an "expected divergence," you'd need the distributions of the random variables to themselves be random variables, so you'll have to introduce at least one more random variable. Is this what you have in mind?

Another way to proceed would be to assume some kind of constraints on the distributions of the (two) random variables in question. For example, if you constrain them to always put non-zero probability mass on every element, then you can form a finite upper bound on the KL divergence.
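For instance, if every element of Q is constrained to carry at least mass eps, then D(P||Q) <= ln(1/eps). A quick numerical check of this claim (eps, k, and the number of trials are arbitrary illustrative values):

```python
import math
import random

def kl(p, q):
    """D(P || Q) for distributions with q_i > 0 everywhere."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

k = 5
eps = 0.01  # assumed floor: every q_i >= eps
rng = random.Random(1)

def random_simplex(n):
    x = [rng.random() for _ in range(n)]
    s = sum(x)
    return [v / s for v in x]

worst = 0.0
for _ in range(10000):
    p = random_simplex(k)
    u = random_simplex(k)
    q = [eps + (1 - k * eps) * ui for ui in u]  # guarantees q_i >= eps, sum q_i = 1
    worst = max(worst, kl(p, q))

print(worst, "<=", math.log(1 / eps))  # bound: D(P||Q) <= ln(1/eps)
```

The bound follows because D(P||Q) <= -sum_i p_i ln(q_i) <= ln(1/min_i q_i); no random trial can exceed it.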

This still requires some further formulation. To speak of an "expected divergence," you'd need the distributions of the random variables to themselves be random variables, so you'll have to introduce at least one more random variable. Is this what you have in mind?
Yes it's true that it requires more info. But since this is the math section I'm not sure if it's appropriate here.

What I am considering is something like this.

I am considering an observer with limited complexity (memory). This observer can distinguish a certain finite set of outcomes. Beyond this, the observer can distinguish certain histories, but these are also finite because of the observer's limited complexity.

In this picture I imagine that there is an emergent, effective "probability space" that is defined by a "window" of history. So the observer tries, by constructing a "probability space" and a "probability measure" from his limited view of history, to predict future outcomes.

By construction he can never make deterministic predictions; moreover, he cannot even make deterministic predictions of the probability measure itself, because the measure itself is uncertain. But if this were a real observer, he would have no choice but to gamble and make the best choice he can.

Another way to proceed would be to assume some kind of constraints on the distributions of the (two) random variables in question. For example, if you constrain them to always put non-zero probability mass on every element, then you can form a finite upper bound on the KL divergence.
Later on I will allow the memory to form substructures, of communicating or "cooperating" microstructures, each defining an "effective discrete, finite probability space". Non-occupied states will be rationalized away.

One can start by picturing the multinomial case, and ask what the probability is of finding a certain frequency for a given sample size, given a prior. Unless I have made another mistake, I think one simple bound would be

$$S_{KL} \le \frac{1}{M} \ln \left\{ \frac{M!}{(1-P)} \frac{ \prod_{i=1}^{k} \rho_{i}^{M\rho_{i}} }{ \prod_{i=1}^{k} (M\rho_{i})! } \right\}$$

Where M is the sample size of the history, and P is the "probability" attached to the bound.
I.e. S_KL <= C with probability P. Clearly the bound -> infinity as P -> 1, but in my context P = 1 is unphysical: there is never certainty, so the situation P = 1 never physically happens.
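For what it's worth, the bound as written can be evaluated numerically with log-gamma functions. The sketch below takes the formula at face value (treating M·ρ_i as a real argument to the factorial via lgamma), with an illustrative ρ and M:

```python
import math

def kl_bound(rho, M, P):
    """Evaluate (1/M) * ln[ M!/(1-P) * prod_i rho_i^(M rho_i) / prod_i (M rho_i)! ],
    using lgamma so that M*rho_i need not be an integer."""
    log_term = math.lgamma(M + 1) - math.log(1 - P)
    for r in rho:
        n = M * r
        log_term += n * math.log(r) - math.lgamma(n + 1)
    return log_term / M

rho = [0.5, 0.3, 0.2]  # illustrative prior, not from any specific model
for P in (0.5, 0.9, 0.99):
    print(P, kl_bound(rho, M=100, P=P))
```

As stated above, the value grows without limit as P -> 1, through the -ln(1-P) term.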

I use probability in quotes because I'm trying to tweak the terminology to define it inductively.

So I'm not doing strict probability theory; I'm trying to apply it inductively and extend the logic a bit, but constrained by total complexity. But this is just the first step. The next step is to understand how the predictability can increase if the memory is shared, forming communicating structures. So far this is pretty much classical statistics; the idea is that quantum statistics will emerge in the development, by being selected for greater predictive power (i.e. fitness).

I will keep expanding this. Your first answer made me see my basic mistake. The semi-probabilistic bound is an even more beautiful solution. But this is only the first step in building the communicating structures. I'll keep thinking.

Note: the motivation for this is my opinion that normal probability theory, as applied to QM, does not really make sense from a physical point of view, mainly because the complexity bound is not included. I'm also relating this to the Bekenstein bound. I am investigating a reconstruction of a modified "logic" that respects complexity bounds, so that there is a limit to how large samples you can retain, and thus the Bayesian estimates from them are only subjective probabilities. But not only is the event space discrete and finite; the probability is too, at least in the reconstruction I have in mind.

/Fredrik

I'd suggest taking a look at network information theory. Engineers in signal processing have been worrying about finite-complexity, finite-memory, finite-precision distributed estimation for some time now. I'm less sure about the connections to physics, but the basic framework you describe is the sort of thing that is studied in the context of sensor networks pretty intensively.

Also, simply throwing out elements that the observer assigns probability zero is not going to help, as it's exactly these elements that give you infinite KL divergence (provided the true probability of these events is nonzero). You may want to look at information theory literature on universal estimation/coding, where some authors have considered the problem of how much probability should be assigned to unobserved events (you expect it to go to zero as you make an infinite number of observations, of course, but it's interesting to consider what to do when you've just started the observation process).
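The simplest scheme of this kind is add-one (Laplace) smoothing, which reserves nonzero mass for every symbol of a known alphabet so the estimate never assigns probability zero; a minimal sketch (the alphabet and data are invented):

```python
from collections import Counter

def laplace_estimate(observations, alphabet):
    """Add-one (Laplace) estimator: never assigns zero probability,
    so D(true || estimate) stays finite even for unseen symbols."""
    counts = Counter(observations)
    n = len(observations)
    k = len(alphabet)
    return {a: (counts.get(a, 0) + 1) / (n + k) for a in alphabet}

obs = list("aababba")  # 7 observations, only 'a' and 'b' actually seen
est = laplace_estimate(obs, alphabet=["a", "b", "c"])
print(est)  # 'c' is unseen but still gets probability 1/10, not 0
```

The unseen symbol's probability shrinks toward zero as observations accumulate, but never reaches it, which is exactly what keeps the divergence finite.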

Thanks for the pointers. Maybe I will dig up a book on that and see if it contains any novel approaches fruitful to my ideas.

I'm less sure about the connections to physics
I guess this is what I am trying to find out. Formally I have no definitive answer yet, but conceptually and intuitively I see a clear connection.

Also, simply throwing out elements that the observer assigns probability zero is not going to help, as it's exactly these elements that give you infinite KL divergence (provided the true probability of these events is nonzero). You may want to look at information theory literature on universal estimation/coding, where some authors have considered the problem of how much probability should be assigned to unobserved events (you expect it to go to zero as you make an infinite number of observations, of course, but it's interesting to consider what to do when you've just started the observation process).
The questions you raise here are relevant, but I am taking the gaming view here. How do you rate your own ignorance? IMHO, there is no way to rate your own ignorance by means of a definite measure. I think in the framework of learning and evolution the key is relative progress by gambling, and I see no way for a player to KNOW that he has the optimum strategy; instead his actions can only be based upon incomplete reasoning about this supposed optimum strategy. What is "optimum" is in motion.

I can't describe all my thinking here, but there is a logic behind throwing out the "non-excited" states: they are in a certain way not distinguishable. Also, maintaining "empty bins" means you need to at least store the information about the bin index, and without support that structure isn't favourable.

Also, the "true" probability exists only in the skies. I'm arguing for no such backgrounds. The prior I use in the KL measure is not absolute; it's a function of a history window only, and there is a feedback between the prior and the expected frequencies. This means that "at equilibrium" the prior is always updated so as to minimize the expected divergence, and if unexpected divergences appear, then clearly the prior is updated. Then I imagine inertia in this updating, relative to the incoming information, so in chaotic conditions the prior doesn't stabilise and there are a lot of divergences. Then, unless the memory record is reorganised in time, it will decay and lose confidence in itself.

So the conceptual connection to physics should be clear.

There are also fuzzy conceptual connections to "blackbody radiation", which might be seen as bleeding off excess information; and of course, if the observer is at equilibrium, the excess information should be distributed "as randomly as possible" relative to himself.

The prior structures are also slowly updating, but their complexity implies an inertia. The prior serves as a guide for "decision making" or gambling, in the spirit of minimizing divergence. Here the conceptual step to geometry and action principles is small.

But to start somewhere, I am trying to elaborate the pieces one by one.

/Fredrik

The questions you raise here are relevant, but I am taking the view of gaming here. How do you rate your own ignorance? IMHO, there is no way rate your own ignorance by means of a certain measure.
Actually, it turns out that there is, at least in certain settings. The usual example that's used to motivate these ideas is of a zoologist who is sent on a mission to a recently-discovered island. His job is to walk around the island, cataloguing all the new animals he sees, and figuring out their relative populations. He has no idea how many different animals there are, or how many total animals live on the island. The big question, then, is "how does he know when he's seen all of the animals?" Equivalently, what probability does he assign to the event "the next animal will be one I haven't seen before"? And it turns out that there is a systematic way to do this that is, in a certain sense, optimal. To get some intuition, he starts out sure that he is completely ignorant: the probability of a new animal is 1. As he walks around and observes animals, his estimate of the probability of a new animal goes down at a certain rate. But he must never assign 0 probability to that event, lest he unexpectedly run across a lion.
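One classical instance of the zoologist's rule is the Good-Turing missing-mass estimate, which estimates the probability that the next animal is new by the fraction of animals seen exactly once so far. A minimal sketch (the sightings are invented, and this is one scheme among several):

```python
from collections import Counter

def prob_unseen(observations):
    """Good-Turing missing-mass estimate: probability that the next
    observation is a species never seen before
    = (# species seen exactly once) / (total observations)."""
    counts = Counter(observations)
    n1 = sum(1 for c in counts.values() if c == 1)
    return n1 / len(observations)

sightings = ["lemur", "gecko", "lemur", "parrot", "gecko", "lemur", "iguana"]
print(prob_unseen(sightings))  # parrot and iguana seen once: 2/7
```

The estimate falls as species get re-observed, but stays positive as long as some species has been seen only once, matching the intuition in the story above.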

I think in the framework of learning and evolution the key is relative progress by gambling, and there is not way I see to for a player to KNOW that he has the optimum strategy, instead his actions can only be based upon incomplete reasoning about this supposed optimum strategy. What is "optimum" is in motion.
It's true that he has no way of knowing that a particular strategy is the optimum, but he CAN know that his method for picking a strategy based on past data is optimal, at least in certain settings.

I can't describe all my thinking here, but there is a logic behind throwing out the "non-excited" states - they are in a certain way not distinguishable.
True, but that only justifies lumping them together, not throwing them out entirely. You should lump them into a single "previously unseen event" bin; throwing them out amounts to expressing certainty that there are no as-yet-unseen events.
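As a sketch of this lumping idea (the bin name and the reserved mass p_unseen are arbitrary illustrative choices, not anything canonical):

```python
from collections import Counter

UNSEEN = "<unseen>"  # hypothetical catch-all bin for not-yet-observed events

def binned_estimate(observations, p_unseen=0.05):
    """Lump all as-yet-unseen outcomes into one bin with a small reserved mass,
    instead of dropping them (which would make the KL divergence infinite)."""
    counts = Counter(observations)
    n = len(observations)
    est = {a: (1 - p_unseen) * c / n for a, c in counts.items()}
    est[UNSEEN] = p_unseen
    return est

est = binned_estimate(list("aababba"))
print(est)  # observed masses scaled by 0.95; 0.05 reserved for the unseen bin
```

Only one extra bin needs to be stored, so the memory objection above does not apply: the bin index cost is constant regardless of how many unseen events it stands for.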

I would also look around for material on nonstationary/adaptive estimation.

Actually, it turns out that there is, at least in certain settings. The usual example that's used to motivate these ideas is of a zoologist that's sent on a mission to a recently-discovered island.
...
And it turns out that there is a systematic way to do this that is, in a certain sense, optimal.
Ok, you seem to be talking here about a strategy to "catalogue the unknown", and thus indirectly catalogue your ignorance. I definitely agree that this is possible. What I mean is that implementing this strategy is a "process", and during the process you don't know what you don't know. But that doesn't mean we can't guess :) So I think we agree here, but I think the strategy of cataloguing, or learning, may not be unique, and even the strategy itself may be changed through the feedback of discoveries.

So I then turn my attention to "selection of strategies". Because even if a strategy was in some sense optimal, if the environment changes it may no longer be optimal.

So we're looking for something like a natural selection of strategies. Organisms and observers are, I think, manifestations of such strategies. Strategies that are really bad simply won't be around except transiently. I am looking for a framework for this generic inductive learning.

The problem of establishing what "optimal" learning is, is what I imagine will be the key to the definition of time. The fact that there is always a direction of "progress" is, I think, related to the arrow of time.

It's true that he has no way of knowing that a particular strategy is the optimum, but he CAN know that his method for picking a strategy based on past data is optimal, at least in certain settings.
Yes, it's something like this I am talking about too. But even the statement "optimal based on past data" is not trivial, because in my thinking at least, the logic of reasoning that decides whether the strategy is optimal based on a given known history is also relative. This is why I picture that, since the logic implicit in the strategy and the "guess generation" is also in motion, the logic of reasoning that determines the optimum based on the history is not universal, but rather the logic implicit in history.

The association I make here is that "first-line history" is simply the information processed and retained, and that the logic of reasoning (the logic employed by a player making decisions on incomplete information) is to be associated with the laws of nature.

So the future is determined by a window of history and the previous state of reasoning.

True, but that only justifies lumping them together, not throwing them out entirely. You should lump them into a single "previously unseen event" bin; throwing them out amounts to expressing certainty that there are no as-yet-unseen events.
Ah, ok. I see what you mean. I caused confusion because I use the word probability in a special way, with a deformed logic. My suggestion is that we can define all the historical probabilities we know, but we can never _know_ that the future relative frequencies will fit them. But we can argue that there is, per the given logic, a best guess we can make.

So when I suggest that the "probability of unseen events is zero", you are surely right that unseen events can still occur in the future. I solve that by induction: by introducing a probability for the probability, or a notion of uncertainty in the probability. I guess that is another way, instead of adding a "blind bin" for unexpected events.

So, probability zero does not mean it won't happen; I interpret it as meaning that one should not place bets on it (*). But if such events are thrown in our face, then we have a logic to handle them, because our estimates of the bets are, first, never perfect, and second, always in motion.

(*) My idea is that this will also be a key to stability! If all players in the game behave like this, it's easy to see how local stability and structure emerge.

So this all seems to suggest a dynamics.

/Fredrik