Summing Probabilities: Gaussian vs Uniform Distribution

In summary, the thread discusses the probabilities of events in infinite sets E(G) and E(U), assigned under a Gaussian-shaped distribution and a uniform distribution, respectively. For a countably infinite set, Gaussian-weighted probabilities can sum to 1, but equal probabilities sum to either 0 or infinity, so no uniform distribution exists on such a set. A discrete uniform distribution is possible only for a finite number of events, which raises the question of how to interpret the uniform distributions used as priors in Bayesian inference and in the demonstration of Freiling's axiom of symmetry.
  • #1
SW VandeCarr
Consider an infinite set E(G) where the elements are interpreted as independent events assigned probabilities under a Gaussian distribution. It can be shown that the probabilities of all events in E(G) will sum to one. Now consider an infinite set E(U) with the same interpretation under a uniform distribution (every event has equal probability). Given an infinite set, this implies that the probability of any randomly chosen event in E(U) is zero. Is it then true that for any continuous uniform probability distribution, the sum of probabilities will be zero?
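
As a quick illustration (not from the original post), here is a minimal Python sketch of one way to read "probabilities assigned under a Gaussian distribution" on a countable set: weight each integer-indexed event by the Gaussian density at that integer and normalize. The normalization step is my assumption; the point is only that such weights have a finite total, so the probabilities can sum to 1.

```python
# A minimal sketch (assumptions mine): weight event k by the Gaussian density
# at k and normalize.  The unnormalized weights already have a finite sum, so
# the normalized probabilities sum to 1 by construction.
import math

def gaussian_weight(k, mu=0.0, sigma=1.0):
    """Unnormalized Gaussian weight at the integer k."""
    return math.exp(-(k - mu) ** 2 / (2 * sigma ** 2))

# Truncate the countable index set for the numerical check; the tails are negligible.
ks = range(-50, 51)
total = sum(gaussian_weight(k) for k in ks)
probs = {k: gaussian_weight(k) / total for k in ks}

print(sum(probs.values()))  # 1.0 (up to floating-point rounding)
```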
 
  • #2
Do you mean integral of the probabilities (for the continuous distribution)?
 
  • #3
If your infinite set of independent events (call it A) is countable, then there can be no uniform distribution. Proof: Suppose that each element of A had equal probability. If this is zero, then the sum of all probabilities is zero; if it is nonzero, then the sum of all probabilities is infinite. In either case, you don't have a probability space.

If your set is uncountable, then you can't sum the probabilities of all the events (unless all but countably many have zero probability).
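
To make the dichotomy in the countable case concrete, here is a small numerical sketch (my own, not part of the post): give each of countably many events the same probability p and watch the partial sums n·p. They converge to 0 when p = 0 and grow without bound when p > 0, so no choice of p yields a total probability of 1.

```python
# Partial sums of n equal probabilities p: either stuck at 0 or diverging.
def partial_sum(p, n):
    """Sum of the first n equal probabilities p."""
    return n * p

for p in (0.0, 1e-6):
    sums = [partial_sum(p, n) for n in (10, 10**3, 10**6, 10**9)]
    print(f"p = {p}: partial sums -> {sums}")
# p = 0.0:   always 0.0
# p = 1e-06: 1e-05, 0.001, 1.0, 1000.0  (grows without bound)
```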
 
  • #4
rochfor1 said:
Do you mean integral of the probabilities (for the continuous distribution)?

I'm actually talking about a random variable that maps from an infinite set of elements termed "events" to a continuous probability space [0,1] according to a Gaussian distribution (G) vs a uniform one (U). The events are countable. The sum of the probabilities in E(G) will be 1 in the limit, and the open integral of G is 1. My question was with E(U). It appears that in any mapping of countable events in E(U) under a (continuous) uniform distribution U, the sum of probabilities will be 1 for any finite number of events but zero for infinitely many events, and the open integral of U is 0. That is, a continuous uniform distribution does not exist (which is essentially what I conclude from adriank's post).

I asked this because 1) the uniform distribution is often used as the prior assumption in Bayesian inference, and 2) in demonstrating Freiling's axiom of symmetry (AX) (see 'maze', "dense geodesic", post 32, Nov 30, in Topology), a random variable chooses a number on the real interval [0,1] and maps to an (infinite?) set of countable subsets (S) on the same interval. (S) would be an infinite set of countable elements under a uniform distribution given a complete mapping, which according to the above would be impossible, since the probabilities sum to zero under a distribution whose open integral is zero.
 
  • #5
To clarify the previous post: the probability p(x), given a continuous distribution G, is expressed as a probability density [0, p(x)] over the definite integral [0,1] of G. I used the term "open" integral to exclude events with probability zero or one.

With respect to the uniform distribution U, the same holds for a finite set of possible events x. However, the uniform distribution must be discrete, since the probability of any one event is fixed as the reciprocal of the total number of events. For an infinite number of events, a (continuous) uniform distribution seems impossible unless the reciprocal of infinity can be defined.
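
A quick numerical illustration of this point (mine, not from the post): for a discrete uniform distribution on N events, each probability is 1/N, the N probabilities sum to exactly 1, and 1/N shrinks toward 0 as N grows, which is why the construction has no countably infinite analogue.

```python
# Discrete uniform on N events: each probability is 1/N and the total is 1,
# but the per-event probability vanishes as N grows.
for N in (6, 100, 10**6):
    p = 1.0 / N          # probability of each single event
    print(N, p, p * N)   # p * N == 1 for every finite N
```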

The demonstration of Freiling's axiom of symmetry (in the Wiki) seems to contemplate a random variable 'choosing' a number from the real interval [0,1], which implies a continuous uniform distribution, since there's no reason to consider that any one number has a different probability of being chosen than any other.
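
A hedged sketch of that intuition (my own illustration, not a proof): drawing from a continuous uniform distribution on [0,1] assigns probability 0 to every individual number, yet assigns probability equal to its length to any subinterval.

```python
# Empirical check: exact values are (almost) never drawn, but interval
# frequencies match interval lengths.
import random

random.seed(0)
draws = [random.random() for _ in range(100_000)]

target = 0.5
hits_point = sum(1 for x in draws if x == target)
hits_interval = sum(1 for x in draws if 0.25 <= x < 0.75)

print(hits_point / len(draws))     # ~0.0: an exact value is essentially never drawn
print(hits_interval / len(draws))  # ~0.5: matches the interval's length
```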
 

1. What is the difference between a Gaussian and a uniform distribution?

A Gaussian distribution, also known as a normal distribution, has a bell-shaped density curve that is symmetric about a central mean value. It is commonly used to model continuous random variables in nature. A uniform distribution, by contrast, has a flat, rectangular-shaped density: every value within a given range is equally likely. Its discrete counterpart assigns equal probability to a finite set of outcomes, such as the faces of a fair die.

2. How do you calculate the sum of probabilities for a Gaussian distribution?

The sum of probabilities for a Gaussian distribution over a range of values is the integral of the probability density function (PDF) over that range. The PDF is f(x) = (1/(σ√(2π))) · e^(−(x−μ)²/(2σ²)), where μ is the mean and σ is the standard deviation. To find the probability of a given range, integrate this density over that range; integrating over the entire real line gives 1.
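
As a short numerical check (my own sketch, not part of the original answer), the integral of the Gaussian PDF over [a, b] equals Φ(b) − Φ(a), where Φ is the normal CDF, which can be written with the error function:

```python
# Gaussian interval probability via the CDF expressed with math.erf.
import math

def normal_cdf(x, mu=0.0, sigma=1.0):
    """Standard expression of the Gaussian CDF via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

mu, sigma = 0.0, 1.0
print(normal_cdf(1, mu, sigma) - normal_cdf(-1, mu, sigma))                 # ~0.6827, the one-sigma rule
print(normal_cdf(math.inf, mu, sigma) - normal_cdf(-math.inf, mu, sigma))   # 1.0: total area under the curve
```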

3. Can the sum of probabilities for a Gaussian distribution exceed 1?

No, the sum of probabilities for a Gaussian distribution cannot exceed 1. This is because the total area under the curve of a normal distribution is equal to 1. In other words, the probability of all possible outcomes must add up to 1. If the sum of probabilities exceeds 1, it means that there is an error in the calculations.

4. How do you interpret the sum of probabilities for a uniform distribution?

The sum of probabilities for a uniform distribution represents the probability of an event occurring within a given range. For example, if the sum of probabilities is 0.5, it means that there is a 50% chance of the event occurring within that range. In a uniform distribution, all outcomes have an equal chance of occurring, so the sum of probabilities is simply the width of the range divided by the total width of the distribution.
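
A minimal sketch of that rule (assumptions mine: a continuous uniform distribution on [low, high] with the helper name chosen for illustration): the probability of landing in a subinterval is its width divided by the total width of the support.

```python
# Interval probability under a continuous uniform distribution on [low, high].
def uniform_interval_prob(a, b, low=0.0, high=1.0):
    """P(a <= X <= b) for X uniform on [low, high], clipping [a, b] to the support."""
    a, b = max(a, low), min(b, high)
    return max(b - a, 0.0) / (high - low)

print(uniform_interval_prob(0.25, 0.75))           # 0.5
print(uniform_interval_prob(2.0, 8.0, 0.0, 10.0))  # 0.6
```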

5. Which type of distribution is more commonly used in real-world applications?

The answer to this question depends on the specific application. Gaussian distributions are often used to model natural phenomena, such as height or weight of a population, while uniform distributions are commonly used in scenarios where all outcomes are equally likely, such as in gambling or random sampling. Ultimately, the choice between the two distributions depends on the data being modeled and the goals of the analysis.
