Infinite Distribution: Generating Unbounded Real Numbers

In summary, the conversation discusses the concept of generating a distribution of an infinite number of samples for an unbounded real number. The results of different methods of doing so are distinct, and the conversation delves into the mathematical formalism for this concept. It is suggested that studying Measure Theory may provide the answer. The conversation also touches on the idea of continuously sampling and the probability of specific values in a continuous distribution.
  • #1
craigi
It seems to me that if we consider different methods of generating a distribution of an infinite number of samples of an unbounded real number, then we get some distinct results.

1) If we randomly sample a value, then its probability must be non-zero, which is also true of any other value.

2) We pick a random value then iteratively increase and decrease this value by a finite, random amount.

3) We take repeated random samples, resulting in infinite spacing.

Obviously I'm not the first to think of this so my question is, what is the mathematical formalism for this? What do I look for to find out more?
 
  • #2
Some observations:

1.1 a continuous distribution can still, in principle, get specific values.
The probability of getting some exact value is 1, even though the probability of any particular exact value is zero.
In real-world measurements, though, continuous distributions are more an idealization - you cannot cut arbitrarily fine.

1.2 A non-zero probability for some value does not mean that the probability of any other value is non-zero. Unless by "unbounded real number" you mean that the probability density is non-zero for all values, but then that would be a tautology.

2. As in a stochastic process? Random walks or Brownian motion?

3. The number of samples does not dictate the spacing of the results. The resolution of the sampling process does that. Ideally, though, this would produce a sum of delta functions.

The usual way to demonstrate a continuous random variable's probability density function is to take many samples in groups and construct a histogram. The more samples, the narrower the bars in the histogram can be. Keep this up and a smooth shape may emerge; this convergence of the histogram to the density is essentially the law of large numbers at work.
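As an aside, here is a minimal numerical sketch of that histogram procedure (my own illustration using NumPy; the standard normal is chosen purely as an example distribution and is not specified anywhere in the thread):

```python
import numpy as np

# Draw many samples from a continuous distribution (standard normal here,
# purely as an illustrative choice) and bin them into a histogram.
rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

# With more samples we can afford narrower bins; the normalized bar heights
# approach the underlying probability density function.
counts, edges = np.histogram(samples, bins=200, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Compare the empirical density with the exact normal density.
exact = np.exp(-centers**2 / 2) / np.sqrt(2 * np.pi)
print("max |empirical - exact| density error:", np.abs(counts - exact).max())
```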

So it looks to me like you want a text on stochastic processes maybe - or just an introductory stats book.
 
  • #3
Simon Bridge said:
Some observations:

1.1 a continuous distribution can still, in principle, get specific values.
The probability of getting some exact value is 1, even though the probability of any particular exact value is zero.
In real-world measurements, though, continuous distributions are more an idealization - you cannot cut arbitrarily fine.

1.2 A non-zero probability for some value does not mean that the probability of any other value is non-zero. Unless by "unbounded real number" you mean that the probability density is non-zero for all values, but then that would be a tautology.

2. As in a stochastic process? Random walks or Brownian motion?

3. The number of samples does not dictate the spacing of the results. The resolution of the sampling process does that. Ideally, though, this would produce a sum of delta functions.

The usual way to demonstrate a continuous random variable's probability density function is to take many samples in groups and construct a histogram. The more samples, the narrower the bars in the histogram can be. Keep this up and a smooth shape may emerge; this convergence of the histogram to the density is essentially the law of large numbers at work.

So it looks to me like you want a text on stochastic processes maybe - or just an introductory stats book.

I'm talking about randomly sampling a value for an unbounded real number and by unbounded I mean that it can take any value. I'm not specifying an a priori probability density distribution, but the important point is that it doesn't converge to zero as we approach +/- infinity. I doubt that any such problem is covered in introductory statistics or stochastic processes texts, but if you can prove me wrong then you've found exactly what I'm looking for.

Regarding your observations:

1.1) I understand your point but I don't think we can say the probability of sampling something that we have actually sampled is zero. If we have sampled it, there exists a chance, however small, that we could sample it again. It's infinitesimal, but non-zero.

1.2) Perhaps I should have specified that I'm talking about a situation where all values are possible and I'm just offering a symmetry argument, in that if one value has non-zero probability then all values have non-zero probability.

2) It's just a sampling scheme to generate an infinite number of samples. I guess we can call it a unidirectional random walk in each direction.

3) I'm just saying that any 2 numbers randomly sampled from all real numbers will differ by an infinite amount.

The whole concept of probability gets a bit sketchy, but I think I've avoided making an incorrect statement. I'm sure there exists a more formal treatment of this, I just don't know where to start looking for it.
 
  • #4
I think I've found the answer.

Measure Theory seems to be what I'm looking for.
 
  • #5
craigi said:
I'm talking about randomly sampling a value for an unbounded real number and by unbounded I mean that it can take any value. I'm not specifying an "a priori" probability density distribution but the important point is that it doesn't converge to zero as we approach +/- infinity.
Just for future reference: when you are trying to describe something where you don't know the "correct" words, because you are just learning, it is important to be specific. i.e. converges as what goes to infinity? What is it that doesn't converge? You were vague there.
People trying to help you will be aware of many more possibilities than you are so you need to help narrow it down.

While you have not specified any particular probability density distribution, you are specifying what kind you are interested in, i.e. it is a probability density, it is continuous (because you wanted any real number value to be possible), and, it appears, it is non-zero for all real number values.

So you have pdf: ##p(x): x\in (-\infty, \infty)## and $$P(a<x<b)=\int_a^b p(x)\;\text{d}x$$
That was pretty much what I was working from when I was trying to understand what you wanted.
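For a concrete sense of what that setup gives you, here is a small numerical sketch (my own example; the standard normal is used as a stand-in for p(x), which the thread never actually specifies):

```python
import numpy as np
from scipy import integrate

# A concrete pdf to stand in for p(x); the standard normal is just an
# illustrative assumption, not something specified in the thread.
def p(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# P(a < x < b) is the integral of the density between a and b.
a, b = -1.0, 2.0
prob, _err = integrate.quad(p, a, b)
print(f"P({a} < x < {b}) = {prob:.4f}")  # about 0.8186 for the standard normal
```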

I doubt that any such problem is covered in introductory statistics or stochastic processes texts, but if you can prove me wrong then you've found exactly what I'm looking for.
So noted - the main problem is trying to describe to me what you are thinking about without actually knowing the "right" words - this means there will be a certain amount of cross-talk until we are on the same page.

Regarding your observations:

1.1) I understand your point but I don't think we can say the probability of sampling something that we have actually sampled is zero. If we have sampled it, there exists a chance, however small, that we could sample it again. It's infinitesimal, but non-zero.
This is simply incorrect - a probability density function can only tell you the probability of getting a value that is between two other values. This material is covered in introductory probability and stats courses.

Since you insisted on a pdf as above, then the probability that x=a (some specific real number) is given by: $$P(x=a)=\int_a^a p(x)\;\text{d}x=0$$
In practice, a real life measurement always has some uncertainty - i.e. there is a range of possible values that will give the same measurement (e.g. from rounding off).

I know it seems logical that, if you got some value off a random sample, then there must be a non-zero probability of getting it again, but that is not how probability, or measurement, works.
There is a non-zero probability of getting very close to that value - but the probability of getting exactly that value is zero. You may get so close to the initial value that your equipment is unable to tell the difference.
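A small Monte Carlo sketch of this point (my own illustration, again assuming a standard normal purely for concreteness): an exact repeat of a previously sampled value essentially never happens, while values within a small tolerance of it turn up with a small but positive frequency.

```python
import numpy as np

rng = np.random.default_rng(1)
first = rng.normal()  # the value we happened to sample first

# Compare many later samples against that first value.
later = rng.normal(size=10_000_000)
exact_hits = np.count_nonzero(later == first)
near_hits = np.count_nonzero(np.abs(later - first) < 1e-3)

print("exact repeats      :", exact_hits)  # essentially always 0
print("within 0.001 of it :", near_hits)   # small but positive, roughly p(first) * 0.002 * N
```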

This is where I started to think you may need some theory about how measurements happen rather than theories about how probability density functions may be set up or modeled. However - it is very unusual for a real measurement to be able to take on any real number value at all ... there is always some sort of restriction determined by the type of thing being measured.

1.2) Perhaps I should have specified that I'm talking about a situation where all values are possible and I'm just offering a symmetry argument, in that if one value has non-zero probability then all values have non-zero probability.
Still does not follow - but if you restrict yourself to those situations where the distribution is non-zero for all real numbers then the point is moot.
But you cannot deduce the type of distribution from a single measured value.

2) It's just a sampling scheme to generate an infinite number of samples. I guess we can call it a unidirectional random walk in each direction.
A statistical sample typically has more than one value in it - each value is known as a data point. You would typically want more than one sample to investigate how a specific random variable is distributed. Thus it is not clear what you mean by that statement ... that's OK: this is just me trying to teach you how to talk to statisticians.

Anyway - a random walk could be used to generate a single set of N>0 random numbers ... a single sample, size N ... as follows:

let ##X = \{x_n\}## where each ##x_n=\sum_{i=1}^M s_i## is the result of M>0 random steps from the origin. The ##i##th step size ##s_i\sim \text{H}(-1,1)## is distributed according to a top hat defined as: $$S\sim \text{H}(a,b) \implies p(s)=\begin{cases} \frac{1}{b-a} &: a<s<b\\ 0 &: \text{else}\end{cases}$$ ... this would be a stochastic process.
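A short numerical sketch of that construction (my own illustration; N, M, and the seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1000  # number of walk endpoints in the sample X
M = 500   # number of top-hat steps per walk

# Each x_n is the sum of M steps drawn from H(-1, 1),
# i.e. the uniform distribution on (-1, 1).
steps = rng.uniform(-1.0, 1.0, size=(N, M))
X = steps.sum(axis=1)

# By the central limit theorem the endpoints are approximately normal
# with mean 0 and variance M * Var(s) = M / 3.
print("sample variance:", X.var(), "expected about", M / 3)
```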

3) I'm just saying that any 2 numbers randomly sampled from all real numbers will differ by an infinite amount.
Why would that be so?
If you have two real numbers, then they must differ by a finite amount.

craigi said:
I think I've found the answer.

Measure Theory seems to be what I'm looking for.
Fair enough. This post basically pre-empts all of the above.

It looks to me like you are shaky on the foundations of probability and statistics though - you are certainly writing as if you are not used to the topics - but that may be due to uncertainty about what you are asking about too.

I wrote all that so you may have better luck with questions here in the future.
Have fun ;)
 
  • #6
craigi said:
I think I've found the answer.

Measure Theory seems to be what I'm looking for.



Your terminology is confusing. By "an infinite distribution of samples", you may be referring to a probability distribution over an infinite number of values, each of which represents a possible realization ("sample value") of a random variable X.

Measure theory sidesteps questions about actually taking random samples. The results from measure theory (and sophisticated statements of probability theory) don't assert that you can actually take random samples. They always say things like "If we take..." or "If we have..." or "Let S be a random sample...". There is no explicit assumption that you can actually take a random sample that has a precise value - such as taking a random sample from a normal distribution with mean 0 and variance 1. The results and definitions only tell you what happens if you could take such a sample.

The study of how to approximate taking a random sample from a continuous probability distribution using computer algorithms falls under the heading of "numerical analysis", which is a fairly practical sort of mathematics - as opposed to measure theory, which is abstract.
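One standard technique from that area is inverse-transform sampling: feed uniform(0, 1) draws through the inverse of the target cumulative distribution function. A minimal sketch (my own example; the exponential distribution is chosen purely for illustration):

```python
import numpy as np

# Target: exponential distribution with rate lam, whose inverse CDF is
# F^{-1}(u) = -ln(1 - u) / lam.  (The choice of distribution is illustrative.)
rng = np.random.default_rng(3)
lam = 2.0
u = rng.uniform(size=100_000)
samples = -np.log(1.0 - u) / lam

print("sample mean:", samples.mean(), "expected:", 1 / lam)
```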

The assumption that you have a random sample from a continuous distribution such as a normal distribution assumes that an event with probability 0 has occurred. This forces a distinction between an event with 0 probability and what is called in common speech "an impossible event". For measure theory, this is not a paradox because measure theory doesn't study the physical possibility or impossibility of events. There are no definitions or theorems in measure theory that say anything about physical impossibility. Since measure theory makes no comment on physical impossibility, interpreting a statement from measure theory as a statement about certainty or impossibility in the physical world depends on a person's beliefs and tastes.
 

1. What is an infinite distribution?

An infinite distribution is a mathematical concept used in probability theory to describe how probability is spread over a continuous, unbounded range of values. It represents the likelihood of an unbounded real number falling in any given region of an infinite set of possibilities.

2. How is an infinite distribution different from a finite distribution?

An infinite distribution differs from a finite distribution in that it deals with the probability of obtaining a value in a continuous range, rather than from a discrete set of values. In a finite (discrete) distribution, the probability of obtaining a particular value can be non-zero, while in an infinite (continuous) distribution, the probability of obtaining any specific value is zero.

3. What are some examples of infinite distributions?

Some examples of infinite distributions include the normal distribution, the exponential distribution, and the Pareto distribution. These distributions can model a wide range of real-world phenomena, such as the heights of a population, the time between events, and the distribution of wealth.

4. How is an infinite distribution calculated?

The calculation of probabilities for an infinite distribution is based on the principles of calculus, specifically the area under a curve. The probability that the value falls in a given interval is equal to the area under the density curve over that interval, which can be calculated using integrals.

5. What is the significance of infinite distributions in science?

Infinite distributions play a crucial role in various scientific fields, such as physics, economics, and biology. They allow for the modeling and understanding of complex systems and phenomena that cannot be accurately described by finite distributions. They also provide a framework for making predictions and analyzing data in a continuous and unbounded context.
