Information loss as a grid is coarsened

  • Context: Graduate
  • Thread starter: wvguy8258
  • Tags: Grid Information Loss

Discussion Overview

The discussion revolves around the concept of information loss when coarsening a grid of binary values (0s and 1s) from a 4x4 configuration to a 2x2 configuration. Participants explore how to quantify the information lost during this aggregation process and the implications for modeling land use change using satellite imagery.

Discussion Character

  • Exploratory, Technical explanation, Debate/contested

Main Points Raised

  • One participant describes a scenario where a 4x4 grid of binary values is coarsened to a 2x2 grid, seeking to understand the information lost in this process.
  • Another participant introduces concepts from Information Theory, suggesting that the loss of information depends on the probability distributions of the values in the grid and discusses how to measure entropy before and after aggregation.
  • A subsequent reply questions the measure of information in bits when only binary data is known without assuming an underlying distribution, suggesting that the information might simply be 16 bits.
  • Another participant counters this by stating that without assuming a probability distribution, one cannot derive a meaningful measure of information, indicating that the assumption of equiprobability is necessary for such calculations.

Areas of Agreement / Disagreement

Participants express differing views on how to quantify information loss, with some advocating for the use of probability distributions and others questioning the validity of information measures without such assumptions. The discussion remains unresolved regarding the best approach to quantify information loss in this context.

Contextual Notes

Limitations include the dependence on assumptions about probability distributions and the subjective nature of defining information in this context. The discussion does not resolve how to approach these assumptions or their implications for measuring information loss.

wvguy8258
Hi,

Not sure if this is the correct sub-forum or not. Perhaps, general math is better. Anyways..

In the following, a simple reference covering what I am after would be very helpful.

Let's say you have a 4x4 grid of cells, where each cell contains either a 1 or a 0. Let's say it is this:

0101
1010
0101
1010


And it covers a certain spatial area of, let's say, 4 m × 4 m, so the resolution of each cell is 1 m × 1 m.

If I coarsen the resolution of the grid so that it is now a 2×2 grid covering the same area, then I take the average value of the four cells collapsed by each aggregation and assign that average to the new cell. So we have

0.5 0.5
0.5 0.5

Is there a way to quantify the information lost in this aggregation? I suppose it could be framed this way: treat the aggregate grid as known and the original 4x4 grid as a signal, and determine how much information is in the original grid given that you know the aggregate values.

Further, if I attempted to estimate the values in the 2x2 grid using some method and came up with

0.4 0.6
0.3 0.2

Is there a way to determine the amount of information held in the 2x2 grid of 0.5 values given that we know the estimate? I suppose this is the amount of "surprise" in the values of the 2x2 grid given the model.
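One way to make this "surprise" concrete is the Kullback-Leibler divergence, summed over cells. This is only a sketch under an assumed model (each cell value treated as a Bernoulli probability); the cell values and estimates are the ones from the post:

```python
from math import log2

def bernoulli_kl(p, q):
    """KL divergence D(p || q) in bits between Bernoulli(p) and Bernoulli(q).
    Measures the extra 'surprise' from using estimate q when the truth is p."""
    total = 0.0
    for prob, est in ((p, q), (1 - p, 1 - q)):
        if prob > 0:
            total += prob * log2(prob / est)
    return total

observed = [0.5, 0.5, 0.5, 0.5]   # the 2x2 block averages from the post
estimate = [0.4, 0.6, 0.3, 0.2]   # the model's estimated 2x2 grid

# Total surprise of the observed grid relative to the model, in bits
surprise = sum(bernoulli_kl(p, q) for p, q in zip(observed, estimate))
print(round(surprise, 3))
```

A perfect estimate gives zero divergence, and worse estimates (like the 0.2 versus 0.5 cell) dominate the total, which matches the intuition that they are the most "surprising".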

The reason I am asking this. I model land use change using satellite imagery to determine land cover and then try to predict locations of change by using information on things that likely influence land cover change (like road location, topographic slope, etc). You can often increase model accuracy by aggregating the satellite imagery. So, you gain predictive ability but on a data set where information has been lost. I am trying to better understand this trade-off so that recommendations can be made regarding the appropriate level of data coarsening.

Thanks for reading,

Seth
 
The discipline of "Information Theory" uses methods that assign a measure of information to situations involving probability. For example, on a map where the values in your 4x4 grid are most likely all zeroes, the reduction to a 2x2 grid involves less loss of information than on another map where 1s and 0s occur with equal frequency. If you want a measure of information based on Information Theory, you need to make assumptions about probabilities.

Gain and loss of information will depend on what probability distributions you use as your "before" and "after" cases. Defining a probability distribution includes defining the random variable(s) it involves. For example, you might regard your 4x4 matrix of data as being generated by an even finer grid, or even by spatially continuous random variables. The entropy of this underlying distribution can be calculated. Given a particular 4x4 matrix, the conditional probability distribution for the underlying data will typically have less entropy than the unconditional distribution. If we average this entropy over all 4x4 matrices (weighted by their probability of occurrence), we get the average entropy of the various conditional distributions. The difference between this average entropy and the entropy of the unconditional distribution is a measure of how much entropy loss (= gain in certainty) we get from having the 4x4 matrix data.

A similar calculation can be done for the 2x2 matrices.

You might want to avoid defining an underlying probability distribution for the data and only define a probability distribution on the 4x4 matrices. You would thus be assuming that knowing the 4x4 matrix is "knowing everything", so knowing it reduces the entropy to zero. You can then calculate the average entropy of the conditional distributions given the various 2x2 matrices and call that the gain in entropy (= increase in uncertainty) from summarizing the data in 2x2 form.
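This second approach can be worked out exactly for the example in the thread. As a minimal sketch, assume all 2^16 binary 4x4 grids are equiprobable (so the unconditional entropy is 16 bits). A 2x2 block average of k/4 leaves C(4, k) equally likely arrangements of that block's four cells, and the blocks are independent given their averages:

```python
from math import comb, log2

def conditional_entropy_given_averages(averages):
    """Entropy (bits) remaining in a binary 4x4 grid once its four 2x2
    block averages are known, assuming all binary grids are equiprobable.
    Each average a = k/4 leaves comb(4, k) equally likely block patterns."""
    return sum(log2(comb(4, round(4 * a))) for a in averages)

# The checkerboard grid in the original post has every block average 0.5:
h = conditional_entropy_given_averages([0.5, 0.5, 0.5, 0.5])
print(round(h, 2))        # 4 * log2(6): bits still uncertain after coarsening
print(round(16 - h, 2))   # bits of information the 2x2 summary retains
```

So under this equiprobable assumption the 2x2 summary retains about 5.66 of the original 16 bits; an average of 0 or 1 pins its block down completely and contributes no residual entropy.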

This shows that there are subjective aspects involved in defining information.

This is a good and inexpensive book on the subject: "An Introduction to Information Theory" by Fazlollah M. Reza.
 
Thank you for your detailed response.

If the only thing you know of the original data is that it is binary (0/1) and you do not assume any underlying distribution, then would the information in bits just be 16?
 
wvguy8258 said:
If the only thing you know of the original data is that it is binary (0/1) and you do not assume any underlying distribution, then would the information in bits just be 16?

I think the numbers that people give for "information in a bit" are based on the assumption that 0 and 1 are equiprobable. So, I'd have to say "No". If you don't assume any probability distribution, you don't get any measure of information.
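A quick calculation illustrates why the 16-bit figure depends on the equiprobability assumption. This sketch assumes the cells are independent with P(1) = p:

```python
from math import log2

def grid_entropy(p, cells=16):
    """Total entropy in bits of a grid of `cells` independent Bernoulli(p)
    cells. Only p = 0.5 gives one full bit per cell."""
    if p in (0.0, 1.0):
        return 0.0  # outcome is certain: no information to gain
    per_cell = -(p * log2(p) + (1 - p) * log2(1 - p))
    return cells * per_cell

print(grid_entropy(0.5))  # equiprobable case: the full 16 bits
print(grid_entropy(0.9))  # a mostly-ones map carries fewer bits
print(grid_entropy(1.0))  # all ones with certainty: zero bits
```

So "16 bits" is not a distribution-free fact about 16 binary cells; it is the entropy of one particular (maximum-entropy) distribution over them.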
 
