- #1
wvguy8258
Hi,
I have a square grid representing a landscape; each cell is forested or non-forested. I am calculating two different forest fragmentation metrics. Because there is a finite number of combinations of forest and non-forest cells, there is a finite number of possible values for each metric, and it is likely that for one or both metrics more than one forest/non-forest configuration will produce the same value. Any such redundancy seems to decrease the amount of information encoded in a metric.

On a small landscape (few cells), I could calculate each pattern metric on all possible landscapes (2^number of cells), producing a discrete probability distribution for each metric, and then calculate the entropy of each distribution. Does it make sense to do this and then say, based on the result, 'the metric with greater entropy contains more information about the landscape'? It seems that if every possible landscape had a unique metric value, that metric would carry the maximum amount of information for a landscape of that size; if a metric always gave the same value, its entropy would be zero.

If this makes sense (and please be brutal if it doesn't), how could one handle the situation where it is not possible to evaluate every possible forest/non-forest configuration (every possible landscape)? That becomes infeasible quite quickly as the number of cells increases. Is it possible to estimate the entropy of a discrete variable analytically or by Monte Carlo simulation?

I've been reading about information-theory applications in imaging, but usually they calculate entropy within a single image based on gray-scale values. Is there pertinent literature I'm missing? Also, would it be better to analyze my metric distributions using the usual variance measures instead?
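For concreteness, here is a minimal sketch of both approaches, assuming a toy fragmentation metric (the number of edges between neighboring cells of different type) as a stand-in for whatever metrics you actually use. It enumerates all 2^(n*n) landscapes on a small grid to get the exact metric distribution and its entropy, then estimates the same quantity on a larger grid by uniform Monte Carlo sampling:

```python
import itertools
import math
import random
from collections import Counter

def edge_contrast(cells, n):
    """Toy fragmentation metric (an assumption, not your actual metric):
    count adjacent cell pairs whose forest/non-forest states differ."""
    grid = [cells[i * n:(i + 1) * n] for i in range(n)]
    contrasts = 0
    for r in range(n):
        for c in range(n):
            if c + 1 < n and grid[r][c] != grid[r][c + 1]:
                contrasts += 1
            if r + 1 < n and grid[r][c] != grid[r + 1][c]:
                contrasts += 1
    return contrasts

def entropy(counts):
    """Shannon entropy (in bits) of a discrete distribution given as counts."""
    total = sum(counts.values())
    return -sum((k / total) * math.log2(k / total) for k in counts.values())

# Exhaustive: all 2^(n*n) landscapes on a small grid.
n = 3
exact = Counter(edge_contrast(cells, n)
                for cells in itertools.product((0, 1), repeat=n * n))
print("exact entropy (3x3):", entropy(exact))

# Monte Carlo: sample landscapes uniformly when enumeration is infeasible.
random.seed(0)
n_big, samples = 8, 50_000
mc = Counter(
    edge_contrast([random.randint(0, 1) for _ in range(n_big * n_big)], n_big)
    for _ in range(samples))
print("Monte Carlo entropy estimate (8x8):", entropy(mc))
```

One caveat worth knowing: the plug-in entropy estimate computed from sampled frequencies is biased low when some metric values are rarely observed, and bias-corrected estimators (e.g. the Miller-Madow correction) exist for exactly this situation.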
Thanks for reading...Seth