Similarity between -/+ weighted distributions

  • #1
Jarvis323
Suppose that you have +/- elements aggregated into a weighted distribution function that represents some deviation from an unknown background distribution.

What would be a good similarity metric for comparing two such distributions (2D or 3D), if they each represent different perturbations from different backgrounds?

In a simple case where the distribution is known, I was looking into Kullback-Leibler divergence, Earth mover's distance, or Bhattacharyya distance, but I would first need to consider how to properly extend them to handle this problem (if it makes sense).

Alternatively, I could just integrate their difference.
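
Here is a minimal sketch of those candidates computed on binned data (the function names are mine; note that KL, Bhattacharyya, and EMD assume nonnegative, normalized histograms, so the signed weights would need handling first, e.g. comparing the positive and negative parts separately):

```python
import numpy as np
from scipy.special import rel_entr
from scipy.stats import wasserstein_distance

def kl_divergence(p, q):
    """KL divergence between two nonnegative histograms (normalized here).
    Returns inf if q has empty bins where p does not."""
    p, q = p / p.sum(), q / q.sum()
    return rel_entr(p, q).sum()

def bhattacharyya(p, q):
    """Bhattacharyya distance between two nonnegative histograms."""
    p, q = p / p.sum(), q / q.sum()
    return -np.log(np.sum(np.sqrt(p * q)))

def emd_1d(bin_centers, p, q):
    """Earth mover's distance between two 1D histograms on shared bins."""
    return wasserstein_distance(bin_centers, bin_centers,
                                u_weights=p, v_weights=q)

def integrated_abs_difference(h1, h2, cell_volume=1.0):
    """Integral of |h1 - h2| over the domain; unlike the metrics above,
    this handles signed (+/-) bin values directly."""
    return np.sum(np.abs(h1 - h2)) * cell_volume
```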

What do you all think?
 
  • #2
Jarvis323 said:
What would be a good similarity metric for comparing two such distributions (2D or 3D), if they each represent different perturbations from different backgrounds?

You must define what you mean by "good" in order to pose a mathematical problem. If you can't articulate a mathematical definition of "good", perhaps you can explain "good" in the full context of a real life problem. It is meaningless to speak of "good" or "bad" statistical measures unless it is clear what decisions will be made on the basis of those measures. Can you give an example of what decisions will be made using a similarity measure?

Can you give an example of two different distributions that you want the similarity measure to say are "similar"?

Jarvis323 said:
In a simple case where the distribution is known
It isn't clear whether you are using "distribution" to describe empirical data or whether you use "distribution" to mean an assumed model for the data.

Jarvis323 said:
Suppose that you have +/- elements aggregated into a weighted distribution function that represents some deviation from an unknown background distribution.

It's hard to imagine such a situation. One way to think about it is that my friend Eddie records deviations from a background distribution that he knows. Then Eddie gives me the data without telling me the background distribution. Unless I imagine that somebody knows the background distribution, I don't understand how anyone can measure deviations from it.

If all I have is empirical data then I can measure the deviation of each datum from the sample mean of the data. Is that what you mean by deviations from an unknown background distribution?
 
  • #3
Stephen Tashi said:
Can you give an example of what decisions will be made using a similarity measure? Can you give an example of two different distributions that you want the similarity measure to say are "similar"? [...] If all I have is empirical data then I can measure the deviation of each datum from the sample mean of the data. Is that what you mean by deviations from an unknown background distribution?

Maybe I can describe the problem more precisely. The purpose of the similarity measure is to decide whether two distributions are close enough that they can be merged (aggregated into one distribution).

The distributions are different essentially in that they are conditioned differently. The full joint distribution is 7 dimensional. The 7th dimension is a weight that can be negative or positive, which is used in a computer simulation in order to perturb an evolving background distribution. Storing and processing the full 7D joint distribution, or the evolving background, is difficult because of the scale. An analysis is done on the data by binning the particles post-hoc into spatial bins, then binning them into weighted 2D or 3D velocity distributions. We want to further locally aggregate these spatially distributed, weighted 2D velocity distributions to show an overview which can be comprehended by a human being. The shape and scale should at least be preserved.
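
As a rough sketch of this binning (assuming the sample is just an (N, 7) array of ##(x,y,z,v_x,v_y,v_z,w)## rows; the array layout, grid resolution, and function name are illustrative, not the actual simulation code):

```python
import numpy as np

def bin_particles(data, n_space=8, n_vel=32):
    """Bin particles into spatial cells, then build one weighted 2D velocity
    histogram (vx, vy) per cell by summing the signed weights per bin."""
    xyz, vel, w = data[:, :3], data[:, 3:5], data[:, 6]

    # Assign each particle to a cell on a regular n_space^3 spatial grid.
    lo, hi = xyz.min(axis=0), xyz.max(axis=0)
    idx = np.clip(((xyz - lo) / (hi - lo) * n_space).astype(int),
                  0, n_space - 1)
    cell = idx[:, 0] * n_space**2 + idx[:, 1] * n_space + idx[:, 2]

    # Use one global velocity range so bins line up across cells,
    # which matters later when comparing histograms between cells.
    vrange = [[vel[:, 0].min(), vel[:, 0].max()],
              [vel[:, 1].min(), vel[:, 1].max()]]

    hists = {}
    for c in np.unique(cell):
        m = cell == c
        # histogram2d simply sums the weights, so negative weights are fine.
        h, _, _ = np.histogram2d(vel[m, 0], vel[m, 1], bins=n_vel,
                                 range=vrange, weights=w[m])
        hists[c] = h
    return hists
```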
 
  • #4
Jarvis323 said:
The purpose of the similarity measure is to decide whether two distributions are close enough that they can be merged (aggregated into one distribution).
Your use of the word "distribution" is unclear. There are distributions in the sense of probability distributions. There are "distributions" in the sense of histograms of empirical data. My guess is that you are talking about histograms of empirical data. My guess is that by "aggregating" them, you mean to take two histograms of empirical data and combine them into one histogram. It isn't clear what criteria you want to use to specify that two histograms should be aggregated into one histogram. My guess is that you want a statistic to use in a hypothesis test about whether the two histograms come from the same probability distribution.
Jarvis323 said:
The distributions are different essentially in that they are conditioned differently. The full joint distribution is 7 dimensional. The 7th dimension is a weight that can be negative or positive, which is used in a computer simulation in order to perturb an evolving background distribution.

In your field of study, the term "background distribution" may be well known. However, just from the terminology of mathematical statistics it isn't clear whether you are talking about a probability distribution or something computed from empirical data. One possibility is that as time passes, more data is collected and the estimate for some probability distribution is updated. Another possibility is that the computer program uses a changing background distribution specified by a theory and not updated by the data.

Jarvis323 said:
Storing and processing the full 7D joint distribution, or the evolving background, is difficult because of the scale. An analysis is done on the data by binning the particles post-hoc into spatial bins, then binning them into weighted 2D or 3D velocity distributions.
I don't know what a "weighted" probability distribution would be versus an "unweighted" one. Can you give an example of each?

Jarvis323 said:
We want to further locally aggregate these spatially distributed, weighted 2D velocity distributions to show an overview which can be comprehended by a human being. The shape and scale should at least be preserved.

I don't understand what you mean by "locally" aggregate.

The first step in describing a problem in statistics is to state the format of the data and explain what it means. I'll guess the format and you can comment on it.

Each datum is a vector of 7 numbers. Each number quantifies a different property of something. Since you speak of "spatial binning" and "velocities", I'll assume the data has the format ##(x,y,z,v_x,v_y,v_z,w)## where ##(x,y,z)## is a spatial location and the ##v##'s are velocities. (It isn't clear what the "weight" ##w## is. Presumably not "mass".)
 
  • #5
I think it's quite complicated to explain all of the details (which I only partially understand myself). Maybe my previous explanation wasn't quite right. Also, I was trying to generalize enough that I might get some suggestions that I could evaluate more deeply on my own.

There is a particle distribution function, ##f(x,y,z,v_x,v_y,v_z,t)##.
https://en.wikipedia.org/wiki/Distribution_function

There are marker particles moving around in ##(x,y,z,v_x,v_y,v_z)## that carry time-varying weights. Their weights represent a change in the number of real particles near ##(x,y,z,v_x,v_y,v_z)## between time steps. In other words, they evolve the particle distribution function.

The data I have is just a large sample of the particles with their positions, velocities and weights. First we are binning them spatially, then binning their weights in ##(v_x,v_y,v_z)##. So each spatial bin has a weighted velocity histogram (although we could also use a GMM or something like that). These "distributions" represent an approximation of how many particles are leaving or entering the respective regions of phase-space, normalized by phase-space volume (change in particles per unit phase-space volume, by position and velocity).

We want to merge similar spatial bins (so that the merged bins are represented by a single histogram). So by locally merging, I mean merging adjacent spatial bins. I'm not completely clear on all of the insights people will try to extract from the result, except that the shape and features are meaningful for studying turbulence and self-organization phenomena, and for understanding and diagnosing degradation of simulation accuracy and performance.
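
As a sketch of one possible merge rule (a greedy thresholded merge I am assuming for illustration, not an established choice; it also compares the original per-bin histograms rather than recomputing merged ones as groups grow):

```python
import numpy as np

def l2_distance(h1, h2):
    """Plain L2 distance between aligned histograms; signed weights are fine."""
    return np.sqrt(np.sum((h1 - h2) ** 2))

def merge_adjacent(hists, neighbors, threshold, dist=l2_distance):
    """hists: dict cell -> 2D histogram; neighbors: dict cell -> adjacent cells.
    Returns a dict mapping each cell to the label of its merged group."""
    label = {c: c for c in hists}

    def find(c):  # union-find with path compression
        while label[c] != c:
            label[c] = label[label[c]]
            c = label[c]
        return c

    for c, nbrs in neighbors.items():
        for n in nbrs:
            if c in hists and n in hists and dist(hists[c], hists[n]) < threshold:
                label[find(n)] = find(c)  # merge the two groups
    return {c: find(c) for c in hists}
```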
 
  • #6
You have greatly clarified the situation. In particular the link to the physical definition of a distribution function is helpful. However, applying statistics to a situation in a logical manner requires applying probability theory. In your data, it is not yet clear how probability enters the picture - if it does at all.

The physical "distribution function" defined in https://en.wikipedia.org/wiki/Distribution_function "gives the number of particles per unit volume in single-particle phase space." That definition says nothing about a probability. If we were to interpret "the number of particles" to mean "the average number of particles" we would give ourselves the option of introducing probability.

I don't yet understand if there is anything probabilistic about the behavior of the marker particles. If we are dealing with data from a deterministic process then defining a measure of whether two parts of it are "similar" is distinct from a concept of "similar" based on the idea that two random samples of data are "similar" if they are generated by the same underlying probability distribution.

You mentioned measuring the difference between two functions by least squares.
Jarvis323 said:
Alternatively, I could just integrate their difference.

That idea could be embellished. For example, if ##f_1(v_x,v_y)## and ##f_2(v_x,v_y)## are two functions, you could find the translation and rotation that make their least squares difference as small as possible, and define that minimum least squares difference to be the measure of similarity. Whether that makes sense depends on whether rotation and translation have some useful interpretation in the physics of the situation.
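
A minimal sketch of that embellishment, assuming ##f_1## and ##f_2## are sampled on a common ##(v_x,v_y)## grid (the choice of scipy.ndimage for the transforms and Nelder-Mead for the search is mine, one option among many):

```python
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def aligned_lsq_distance(f1, f2):
    """Minimum least-squares difference over rotations (in degrees) and
    translations of f2 relative to f1."""
    def cost(params):
        angle, dx, dy = params
        g = shift(rotate(f2, angle, reshape=False, order=1),
                  (dx, dy), order=1)
        return np.sum((f1 - g) ** 2)

    res = minimize(cost, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
    return res.fun, res.x  # minimum difference and the (angle, dx, dy) found
```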
 

1. What are weighted distributions?

Weighted distributions are distributions in which each data point is assigned a specific weight or importance. This weight scales the data point's contribution to the distribution; in this thread, the weights can be negative as well as positive.

2. What does it mean for two distributions to have a similarity in weights?

When two distributions have a similarity in weights, it means that the relative importance or weight assigned to each data point in the two distributions is similar. This can indicate that the two distributions have a similar pattern or shape.

3. How can we measure the similarity between weighted distributions?

The most common way to measure the similarity between weighted distributions is with statistical measures such as correlation coefficients or distance metrics, for example the Kullback-Leibler divergence, earth mover's distance, or Bhattacharyya distance discussed above. These measures compare the weights assigned to each data point in the two distributions and return a numerical value for their similarity.
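
For instance, a tiny illustration using two hypothetical flattened weight vectors:

```python
import numpy as np

h1 = np.array([0.1, -0.3, 0.5, 0.2])   # made-up signed weights
h2 = np.array([0.2, -0.2, 0.4, 0.1])

corr = np.corrcoef(h1, h2)[0, 1]       # correlation coefficient
l2 = np.linalg.norm(h1 - h2)           # Euclidean distance
print(f"correlation={corr:.3f}, L2 distance={l2:.3f}")
```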

4. What factors can affect the similarity between weighted distributions?

The similarity between weighted distributions can be affected by the type of distribution (e.g. normal, skewed, etc.), the number of data points, and the weights assigned to each data point. Additionally, any changes in the data or the weights can also impact the similarity between distributions.

5. How can understanding the similarity between weighted distributions be useful?

Understanding the similarity between weighted distributions can be useful in various fields such as economics, finance, and healthcare. It can help identify patterns and trends in data, make predictions, and inform decision-making processes. For example, in finance, understanding the similarity between weighted stock portfolios can help investors make informed investment decisions.
