Information loss as a grid is coarsened

wvguy8258
Hi,

Not sure if this is the correct sub-forum; perhaps general math would be better. Anyway...

A simple reference covering what I am after would be very helpful.

Let's say you have a 4x4 grid of cells, each of which contains either a 1 or a 0, like this:

0101
1010
0101
1010


It covers a spatial area of, let's say, 4 m x 4 m, so the resolution of each cell is 1 m x 1 m.

If I coarsen the resolution so that it is now a 2x2 grid covering the same area, then I take the average of the four cells collapsed by each aggregation and assign that average to the new cell. So we have

0.5 0.5
0.5 0.5
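
For concreteness, here is the aggregation step as a small Python sketch (plain Python, no libraries; the helper name coarsen is just mine):

Code:
fine = [
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
]

def coarsen(grid, factor):
    """Average each non-overlapping factor-by-factor block of a square grid."""
    m = len(grid) // factor
    coarse = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            block = [grid[i * factor + r][j * factor + c]
                     for r in range(factor) for c in range(factor)]
            coarse[i][j] = sum(block) / len(block)
    return coarse

print(coarsen(fine, 2))  # [[0.5, 0.5], [0.5, 0.5]]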

Is there a way to quantify the information lost in this aggregation? I suppose it could be framed this way: the aggregate grid is known, the original 4x4 grid is the signal, and we want to determine how much information is in the original grid given that we already know the aggregate values.

Further, if I attempted to estimate the values in the 2x2 grid using some method and came up with

0.4 0.6
0.3 0.2

Is there a way to determine the amount of information held in the 2x2 grid of 0.5 values given that we know the estimate? I suppose this is the amount of "surprise" in the values of the 2x2 grid given the model.
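
To make the question concrete, here is the kind of calculation I have in mind (my own guess at a formalization, treating each estimated value as the probability that any one fine-scale cell in that block is a 1):

Code:
import math

observed = [0.5, 0.5, 0.5, 0.5]   # actual 2x2 block averages, row by row
predicted = [0.4, 0.6, 0.3, 0.2]  # my model's estimates, row by row

# If each of the 4 fine cells in a block is an independent Bernoulli(p) draw,
# the block average a = k/4 occurs with probability C(4,k) p^k (1-p)^(4-k),
# and the "surprise" of observing it is -log2 of that probability.
def block_surprisal(a, p):
    k = round(4 * a)  # number of 1s implied by the average
    prob = math.comb(4, k) * p**k * (1 - p) ** (4 - k)
    return -math.log2(prob)

total = sum(block_surprisal(a, p) for a, p in zip(observed, predicted))
print(f"total surprisal: {total:.2f} bits")  # about 7.69 bits here

Is that the right sort of quantity, or is there a more standard one?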

The reason I am asking: I model land-use change using satellite imagery to determine land cover, and then try to predict locations of change using information on factors that likely influence land-cover change (such as road location and topographic slope). You can often increase model accuracy by aggregating the satellite imagery, so you gain predictive ability, but on a data set where information has been lost. I am trying to better understand this trade-off so that recommendations can be made regarding the appropriate level of data coarsening.

Thanks for reading,

Seth
 
The discipline of "Information Theory" uses methods that assign a measure of information to situations involving probability. For example, on a map where the values in your 4x4 grid are almost certainly all zeroes, the reduction to a 2x2 grid involves less of a loss of information than on another map where 1's and 0's occur with equal frequency. If you want a measure of information based on information theory, you need to make assumptions about probabilities.

Gain and loss of information will depend on what probability distributions you use as your "before" and "after" cases. Defining a probability distribution includes defining the random variable(s) it involves. For example, you might regard your 4x4 matrix of data as being generated by an even finer grid, or even by a spatially continuous random variable. The entropy of this underlying distribution can be calculated.

Given a particular 4x4 matrix, the conditional probability distribution for the underlying data will typically have less entropy than the unconditional distribution. If we average this entropy over all 4x4 matrices (weighted by their probability of occurrence), we get the average entropy of the various conditional distributions. The difference between this average entropy and the entropy of the unconditional distribution is a measure of how much entropy loss (= gain in certainty) we get from having the 4x4 matrix data.
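
In symbols, if X denotes the underlying data and M the observed 4x4 matrix, the difference just described is the mutual information between them:

$$I(X;M) \;=\; H(X) \;-\; \sum_{m} P(M=m)\, H(X \mid M=m) \;=\; H(X) - H(X \mid M)$$

so "information gained from the data" and "average reduction in entropy" are the same number.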

A similar calculation can be done for the 2x2 matrices.

You might want to avoid defining an underlying probability distribution for the data and instead only define a probability distribution on the 4x4 matrices. Thus you assume that knowing the 4x4 matrix is "knowing everything", so knowing it reduces the entropy to zero. You can then calculate the average entropy of the conditional distributions given the various 2x2 matrices, and call that the gain in entropy (= increase in uncertainty) from summarizing the data in 2x2 form.
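
As a concrete illustration of that second approach (my own sketch, assuming all 2^16 possible 4x4 binary grids are equally likely), a brute-force enumeration in Python:

Code:
from collections import Counter
from math import log2

# Enumerate all 2^16 equally likely 4x4 binary grids (16 bits of entropy),
# map each to its 2x2 summary (the four block sums, which carry the same
# information as the block averages), and count how many grids share each summary.
counts = Counter()
for x in range(2 ** 16):
    bits = [(x >> i) & 1 for i in range(16)]  # grid cells in row-major order
    summary = tuple(
        bits[8 * bi + 2 * bj] + bits[8 * bi + 2 * bj + 1]
        + bits[8 * bi + 2 * bj + 4] + bits[8 * bi + 2 * bj + 5]
        for bi in range(2) for bj in range(2)
    )
    counts[summary] += 1

total = 2 ** 16
# With a uniform prior, H(grid | summary = s) = log2(number of grids giving s),
# and the average conditional entropy weights each summary by its probability.
h_cond = sum(c / total * log2(c) for c in counts.values())
print("H(grid) = 16.00 bits")
print(f"H(grid | 2x2 summary) = {h_cond:.2f} bits  (uncertainty remaining)")
print(f"I(grid; summary) = {16 - h_cond:.2f} bits  (information retained)")

This prints about 7.88 bits of entropy remaining given the 2x2 summary; in other words, under the uniform assumption the four averages pin down roughly 8.12 of the original 16 bits.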

This shows that there are subjective aspects involved in defining information.

This is a good and inexpensive book on the subject: "An Introduction to Information Theory" by Fazlollah M. Reza.
 
Thank you for your detailed response.

If the only thing you know of the original data is that it is binary (0/1), and you do not assume any underlying distribution, would the information in bits just be 16?
 
wvguy8258 said:
If the only thing you know of the original data is that it is binary (0/1), and you do not assume any underlying distribution, would the information in bits just be 16?

I think the numbers that people give for "information in a bit" are based on the assumption that 0 and 1 are equiprobable. So, I'd have to say "No". If you don't assume any probability distribution, you don't get any measure of information.
 