Is dx ever truly equal to zero in integration theory?

  • Thread starter iScience
  • #1
iScience
1.)does an infinite number of zeroes summed up equal a finite value?

2.)does dx=0?

3.)does 2xdx=0?

4.)the probability of picking 100 in the range of all natural numbers is zero?

how can the probability be zero? Shouldn't it be dN, i.e. an infinitesimally small chance? I just don't see how this can be zero. Say you pick a natural number Ni, and you say that this number had a zero probability of being picked; then you pick any number in the range of natural numbers. One thing is certain: you WILL pick a number. So you end up with some Nii, but someone I know says that all the natural numbers have a probability of zero of being picked. But we know we just picked one... so if the probability was truly zero, why and how did I just pick Nii?

how can the answer to number 3 be yes? That would imply that every differential equation, e.g. (x+y)dx + (x-y)dy = 0, would reduce to 0 = 0, since (x+y)dx would just equal zero and (x-y)dy would just equal zero... no work to be done, and no reason for naming the damn things: homogeneous ODEs, exact equations...

and if 2x dx = 0, then you're summing up a bunch of zeros. How can that EVER reach a finite value other than zero, regardless of how many times you sum it?
 
  • #2
iScience said:
1.)does an infinite number of zeroes summed up equal a finite value?

Define "summing an infinite number of zeroes". Are you talking about a series with terms all 0? Then we have that ##0 + 0 + 0 + ... = 0##. Did you have something different in mind?

2.)does dx=0?

3.)does 2xdx=0?

Please define what you mean by ##dx##. In the standard interpretation of differential forms, this is false.

4.)the probability of picking 100 in the range of all natural numbers is zero?

Define "probability". This is a very subtle point. There is no uniform probability distribution on the natural numbers. So when talking about probability of picking a certain number, you need to be very specific on what distribution you're using.
 
  • #3
micromass said:
Define "summing an infinite number of zeroes". Are you talking about a series with terms all 0? Then we have that ##0 + 0 + 0 + ... = 1##.
I cannot agree with my honorable colleague. Clearly ##0 + 0 + 0 + ... = 2##.
 
  • #4
(Just kidding. Obviously ##0 + 0 + 0 + ... = 0##. How could it be anything else?)
 
  • #5
jbunniii said:
I cannot agree with my honorable colleague. Clearly ##0 + 0 + 0 + ... = 2##.

Oh God. What did I write :cry: I corrected it.
 
  • #6
It's not even a full moon yet.
 
  • #7
1) "dx" has no value unless you are working in the hyperreals. The epsilon-delta definition of limits was constructed precisely to avoid treating infinitesimals as numbers, and for good reason: doing so causes unnecessary ambiguity. So no, [itex]dx \neq 0[/itex].

However, infinitesimals are negligible quantities as they stand; that is why they are called "infinitesimals". This is emphasized in hyperreal theory by the standard part function, and it also shows up in the sum for the Riemann integral.

2) Look above. An infinitesimal isn't a number you can work with, but it does behave like a real number under some circumstances, for example in the chain rule where you can cancel infinitesimals. However, as I said above, something like [itex]2x\,dx[/itex] is not a number.

3) Obviously [itex]0+0+0... = 100[/itex]. No, just kidding. If you are interpreting it as the sum [itex]\displaystyle \sum 0[/itex], then obviously it is 0. However, there are other interpretations of this statement. Take the sequence [itex]\displaystyle a_n = \sum^{n}_{k=1} \frac{1}{n} = 1[/itex]. This is a constant sequence, but it can also be interpreted as [itex]0+0+0...[/itex] as [itex]n\to\infty[/itex] from a certain angle of view.

As another example, the Riemann sum for an integral can also be interpreted as an infinite sum of zeroes; because it is the sum of the areas of infinitely many intervals with widths infinitely small. As the width goes to zero, this is also an interpretation of the sum you wrote. You must be more precise.

4) The probability distribution function is important here, but applying the usual distribution function, yes; this is correct. However, this function is not so well defined with infinite sets. The reason these probabilities "add up to 1" is because of the sum sequence I gave in 3). There are other probability distribution functions; but the usual "there is an equal chance for each event to happen" distribution leads to this absurd-ish result.

It would be better to use a distribution function like [itex]P(n) = 2^{-n-1}[/itex]. However, this function would have "unnatural" results; because the probability to pick 0 is 1/2.
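To make this concrete, here is a quick numerical sketch (Python, names mine, illustrative only) of that distribution ##P(n) = 2^{-n-1}## on the naturals: the probabilities form a geometric series summing to 1, with half of the mass sitting on 0.

```python
def P(n):
    """Probability of picking the natural number n under P(n) = 2**(-n-1)."""
    return 2.0 ** (-n - 1)

# The probabilities form a geometric series; partial sums approach 1 quickly.
total = sum(P(n) for n in range(60))
print(P(0))   # 0.5 -- half the mass sits on 0, the "unnatural" part
print(total)  # very close to 1.0
```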
 
  • #8
iScience said:
1.)does an infinite number of zeroes summed up equal a finite value?
Yes. ##\displaystyle \sum_{n=1}^{\infty}\left[0\right]=0##, which is finite.
iScience said:
2.)does dx=0?
Only if ##x## is something called a closed form, which is a differential form that has an exterior derivative of 0.
iScience said:
3.)does 2xdx=0?
Let's suppose ##2\neq0## (unless, of course, jbunn knows something we don't. :tongue:). If ##x\neq0## and ##dx\neq0##, then ##2x \, dx \neq 0##.
iScience said:
4.)the probability of picking 100 in the range of all natural numbers is zero?
Yes. This is actually a question of measure theory and sigma algebras.
 
  • #9
Millennial,

3.) Why would you say that 0+0+0+0+... equals some number other than zero if that was not what you meant? And I don't understand what you're saying when you say "sequence" and then give me a sum; you then say it equals 1, so I assume you're talking about the sequence and not the series, because the series diverges to infinity. What was the sum for?

Millennial said:
This is a constant sequence, but it can also be interpreted as 0+0+0... as n→∞ from a certain angle of view.

I don't know what angle to look at that from to end up with what you got.

Millennial said:
As another example, the Riemann sum for an integral can also be interpreted as an infinite sum of zeroes; because it is the sum of the areas of infinitely many intervals with widths infinitely small. As the width goes to zero, this is also an interpretation of the sum you wrote. You must be more precise.

"approaches zero" does NOT mean "equals" zero.

Millennial said:
4) The probability distribution function is important here, but applying the usual distribution function, yes; this is correct. However, this function is not so well defined with infinite sets. The reason these probabilities "add up to 1" is because of the sum sequence I gave in 3). There are other probability distribution functions; but the usual "there is an equal chance for each event to happen" distribution leads to this absurd-ish result.

Why is there being an equal probability for all natural numbers absurd? And before, you were telling me the probability was zero: there is no probability whatsoever for a natural number to be picked at random out of the range of all natural numbers. That's what "zero probability" means in this case; you said that. Do you still stand by that statement? Or are you saying that since all natural numbers have zero probability of being chosen, they are therefore all equal?

And forgive me, but I don't understand what you guys are referring to in this context when you say "distribution". Why do we need a function to determine the probability of a natural number being picked from the range of all natural numbers? It's not like 1 is 600 times more probable of being picked than 63; it's RANDOM... so again, why do we need a distribution function to determine probability for this?
 
  • #10
iScience said:
"approaches zero" does NOT mean "equals" zero.
The Force is strong in this one. I like him. :biggrin:

Who says the distribution has to have different values for different numbers? Consider a uniform probability distribution.
 
  • #11
Mandelbroth said:
The Force is strong in this one. I like him. :biggrin:

Who says the distribution has to have different values for different numbers? Consider a uniform probability distribution.

There is no uniform probability distribution on the natural numbers.
 
  • #12
micromass said:
There is no uniform probability distribution on the natural numbers.
I'm not suggesting there is. I'm attempting to provide intuition on a distribution in which "1 is not 600 times more likely than 63." :tongue:
 
  • #13
iScience said:
why is there being an equal probability for all natural numbers absurd?
What probability would you assign to each number? Let us call this probability ##p##. If ##p = 0## then the total probability is
$$\sum_{n\in \mathbb{N}} p = \sum_{n\in \mathbb{N}} 0 = 0$$
whereas if ##p > 0## then
$$\sum_{n\in \mathbb{N}} p = \infty$$
But a probability distribution must sum to ##1##, not ##0## or ##\infty##.

By the way, this same fact (a countably infinite sum of a nonnegative constant is either ##0## or ##\infty##) is the key reason we are able to construct sets of real numbers that are not Lebesgue measurable.
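A numerical sketch of this dichotomy (Python, illustrative only; `total_mass` is my own name): the partial sums of a constant ##p## over the naturals either stay at 0 forever or grow without bound, so they can never reach 1.

```python
def total_mass(p, n_terms):
    """Partial sum of the constant probability p over the first n_terms naturals."""
    return sum(p for _ in range(n_terms))

print(total_mass(0.0, 10**6))    # 0.0 no matter how many terms we take
print(total_mass(1e-9, 10**6))   # about 0.001, and growing with n_terms
```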
 
  • #14
Mandelbroth said:
I'm not suggesting there is. I'm attempting to provide intuition on a distribution in which "1 is not 600 times more likely than 63." :tongue:

In past 8, you said yes to his question. This is false.

And the notion of "closed form" has nothing to do with this thread. The very definition of ##dx## is that it is a function ##dx:\mathbb{R}\rightarrow \mathbb{R}^*## such that ##dx(p):\mathbb{R}\rightarrow \mathbb{R}## is a linear function such that ##dx(p)(h)=h##. So it is not equal to ##0##. More generally, we have that ##df(p)(h) = f^\prime(p) h##, so ##df(p) = f^\prime(p) dx##.
 
  • #15
jbunniii said:
By the way, this same fact (a countably infinite sum of a nonnegative constant is either ##0## or ##\infty##) is the key reason we are able to construct sets of real numbers that are not Lebesgue measurable.
The sets of real numbers you are discussing (I'm guessing the Vitali set?) are not what we are discussing here, correct? I thought all singletons in ##\mathbb{R}## have Lebesgue measure 0, which implies that the probability of their selection when considering all real numbers is 0.

micromass said:
In past 8, you said yes to his question. This is false.
See above.

micromass said:
And the notion of "closed form" has nothing to do with this thread. The very definition of ##dx## is that it is a function ##dx:\mathbb{R}\rightarrow \mathbb{R}^*## such that ##dx(p):\mathbb{R}\rightarrow \mathbb{R}## is a linear function such that ##dx(p)(h)=h##. So it is not equal to ##0##. More generally, we have that ##df(p)(h) = f^\prime(p) h##, so ##df(p) = f^\prime(p) dx##.
This depends on how we define ##x##. ##dx## is the exterior derivative of some form ##x##. For example, if we set ##x=d\beta## (that is, if x is exact), then ##dx=0##.
 
  • #16
Mandelbroth said:
The sets of real numbers you are discussing (I'm guessing the Vitali set?) are not what we are discussing here, correct? I thought all singletons in ##\mathbb{R}## have Lebesgue measure 0, which implies that the probability of their selection when considering all real numbers is 0.

Again, there is no uniform probability distribution on the set of all real numbers. If you want to talk about the "probability of selecting a real number", then you need to specify a specific distribution.
 
  • #17
micromass said:
Again, there is no uniform probability distribution on the set of all real numbers. If you want to talk about the "probability of selecting a real number", then you need to specify a specific distribution.
As far as I understand, my thinking applies to any given distribution on the real numbers, not just a (nonexistent :frown:) uniform one. Consider a Gaussian distribution, for example. The probability of picking a given number is still 0 because singletons have measure 0. The only time I can think this might not apply to an (existing) distribution might be some form of Dirac Delta distribution, but I don't have time to check right now. I'll be happy to discuss this with you privately when I get back from dinner, but I suggest we refocus to the OP's questions.
 
  • #18
Mandelbroth said:
As far as I understand, my thinking applies to any given distribution on the real numbers, not just a (nonexistent :frown:) uniform one. Consider a Gaussian distribution, for example. The probability of picking a given number is still 0 because singletons have measure 0. The only time I can think this might not apply to an (existing) distribution might be some form of Dirac Delta distribution, but I don't have time to check right now. I'll be happy to discuss this with you privately when I get back from dinner, but I suggest we refocus to the OP's questions.

Your thinking applies to distributions we call absolutely continuous. There are many other kinds of distributions on ##\mathbb{R}##.
 
  • #19
micromass said:
In past 8, you said yes to his question. This is false.

"In past 8," is this Irish or autocorrect?
 
  • #20
jedishrfu said:
"In past 8," is this Irish or autocorrect?

An autocorrect function that is using an Irish dictionary. :frown:
 
  • #21
Seriously, why is anyone bringing up differential forms? It is clear that the OP's questions lie in ordinary calculus, so why confuse him further? If you want to impress people, there's always the YouTube comments section.

As Millennial said, ##dx = 0## is a nonsensical statement if you are working in the reals. Usually, hand-wavy physics books will refer to ##dx## as an "infinitesimal amount of such and such" (and I even know of one physics book that takes limits with ##dx## in the real number system!), but don't take that literally if you are working in the reals (which you most likely are).
 
  • #22
Mandelbroth said:
The sets of real numbers you are discussing (I'm guessing the Vitali set?) are not what we are discussing here, correct?
Correct, it was a side note, but a related note, which is why I used the language "by the way."
I thought all singletons in ##\mathbb{R}## have Lebesgue measure 0, which implies that the probability of their selection when considering all real numbers is 0.
Yes, that is correct. However, there is no contradiction here.

We can define a uniform probability distribution on any measurable set ##X## of real numbers with nonzero, finite measure ##m(X)##. To do this, we simply define ##P(A) = m(A) / m(X)## for any measurable subset ##A \subset X##.

If ##m## is Lebesgue measure, then this allows us to define a uniform distribution on any Lebesgue measurable set ##X \subset \mathbb{R}## with ##0 < m(X) < \infty##.

Similarly, if ##m## is counting measure, then this allows us to define a uniform distribution on any set ##X## with a finite, nonzero number of elements.

Unfortunately if ##X## is countably infinite, then neither of these situations will apply: the Lebesgue measure of ##X## is zero (too small), but the counting measure of ##X## is infinite (too large).
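As an illustrative sketch (Python; the function name is mine, not standard), here is the construction ##P(A) = m(A)/m(X)## with ##m## the counting measure on a finite set, which is exactly the familiar "favorable over total" uniform probability:

```python
from fractions import Fraction

def uniform_probability(A, X):
    """P(A) = m(A ∩ X) / m(X) for counting measure m on a finite nonempty X."""
    return Fraction(len(A & X), len(X))

X = set(range(1, 7))                      # outcomes of a fair die
print(uniform_probability({2, 4, 6}, X))  # 1/2 -- probability of an even roll
print(uniform_probability(X, X))          # 1 -- total mass, as required
```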
 
  • #23
micromass says "there is no uniform probability distribution on the natural numbers"
so... is this to say that there is no distribution at all? I still don't understand what you guys are talking about with the distribution thing. WHAT distribution are you talking about? If you mean the probability distribution on all the natural numbers, which is really what it sounds like, then why aren't the probabilities all equal and infinitesimal? i.e. probability P[itex]\rightarrow[/itex]0 such that P really is dP, and dP1=dP2=dP3=... why is this NOT the case?


jbunniii

jbunniii said:
What probability would you assign to each number? Let us call this probability ##p##. If ##p = 0## then the total probability is
$$\sum_{n\in \mathbb{N}} p = \sum_{n\in \mathbb{N}} 0 = 0$$
whereas if ##p > 0## then
$$\sum_{n\in \mathbb{N}} p = \infty$$
But a probability distribution must sum to ##1##, not ##0## or ##\infty##.

By the way, this same fact (a countably infinite sum of a nonnegative constant is either ##0## or ##\infty##) is the key reason we are able to construct sets of real numbers that are not Lebesgue measurable.

okay fine, but if p≠0, and p is not greater than 0, then what is p? p HAS to be something, and I thought this something was dN, or dP if you will. And I don't see how [itex]\sum[/itex]dP from some initial value to infinity (sorry, I don't know how to write the notation on here) automatically equals infinity. Why?...
 
  • #24
sorry guys, i didn't take measure theory or any analysis classes yet so i don't know much of what is being discussed here.
 
  • #25
iScience, the thing is that whenever you talk about probability, you should realize that it has a very well-defined meaning. Not just anything can be a probability. It has to satisfy certain rules (called the Kolmogorov axioms).
One of the rules states that a probability must always be a real number. An infinitesimal quantity is not a real number, by definition. So an infinitesimal cannot be a valid probability.
 
  • #26
iScience said:
okay fine, but if p≠0, and p is not greater than 0, then what is p? p HAS to be something
This is exactly the problem. There is no ##p## that works. This shows that it is impossible to define a uniform distribution on the natural numbers, or on any countable set.
 
  • #27
Your thought that "p has to be something" or "dx has to be something" is the thing causing problems here. dx is just a placeholder; it is not a real number. From a certain point of view, you could interpret an infinitesimal like this: [itex]\displaystyle dx = \lim_{h\to 0} x(a+h)-x(a)[/itex], but be careful with this definition: the limit is not to be evaluated here. The definition becomes meaningful when you take ratios of infinitesimals; for example, it yields the usual definition of the derivative [itex]\displaystyle \frac{dy}{dx} = \lim_{h\to 0} \frac{y(a+h) - y(a)}{h}[/itex]. Still, even with ratios, you have to be careful. An infinitesimal is not meant to be defined this way; it is merely a notation. I am only doing this because you want a solid definition, which is not quite possible.
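To see numerically why the ratio, rather than the individual infinitesimals, is the meaningful object, here is a small sketch (Python, my own example, assuming ##y = x^2## at ##a = 3##, where the derivative is 6): the numerator and denominator each shrink toward 0, but their ratio settles down.

```python
def difference_quotient(y, a, h):
    """The ratio (y(a+h) - y(a)) / h -- finite for every h != 0."""
    return (y(a + h) - y(a)) / h

y = lambda x: x ** 2
for h in (1e-1, 1e-4, 1e-7):
    # numerator and denominator both shrink, but the quotient approaches 6.0
    print(difference_quotient(y, 3.0, h))
```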
 
  • #28
I've seen this before...that 0+0+0+...=1.

I have a calc textbook (James Stewart 6th edition), in the chapter problems on infinite sequences and series there is a problem which asks the reader to state what is wrong with the following calculation:

0=0+0+0+...

=(1-1)+(1-1)+(1-1)+...

=1-1+1-1+1-1+...

=1+(-1+1)+(-1+1)+(-1+1)+...

=1+0+0+0+...=1

Therefore, 0=1...spooky. Apparently this monstrosity came from an Italian mathematician Guido Ubaldus.
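What goes wrong is visible in the partial sums of Grandi's series ##1 - 1 + 1 - 1 + \cdots##, which oscillate forever and never settle on a limit, so there is no "sum" to regroup in the first place. A quick check (Python, sketch only):

```python
# Partial sums of Grandi's series 1 - 1 + 1 - 1 + ...
partial_sums = []
s = 0
for n in range(8):
    s += (-1) ** n
    partial_sums.append(s)

print(partial_sums)  # [1, 0, 1, 0, 1, 0, 1, 0] -- oscillates, no limit
```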
 
  • #29
AdkinsJr said:
I've seen this before...that 0+0+0+...=1.

I have a calc textbook (James Stewart 6th edition), in the chapter problems on infinite sequences and series there is a problem which asks the reader to state what is wrong with the following calculation:

0=0+0+0+...

=(1-1)+(1-1)+(1-1)+...

=1-1+1-1+1-1+...

=1+(-1+1)+(-1+1)+(-1+1)+...

=1+0+0+0+...=1

Therefore, 0=1...spooky. Apparently this monstrosity came from an Italian mathematician Guido Ubaldus.

Associativity doesn't work in series. The general property only works for absolutely convergent series.
 
  • #30
##\Sigma (-1)^{n}## doesn't converge to ##0##. You can't apply the associativity law of addition to an arbitrary infinite series.
 
  • #31
micromass said:
Associativity doesn't work in series. The general property only works for absolutely convergent series.

That was my first thought too. It's very funny how they just pull out that 1; you could just as easily show that it is equal to 2 if you wanted. The conclusion in the back of the book is that the series is divergent, which I think is equivalent to what you said.
 
  • #32
AdkinsJr said:
The conclusion in the back of the book is that the series is divergent, which I think is equivalent to what you said.

It's not, though. A series [itex]\Sigma a_n[/itex] is called absolutely convergent if [itex]\Sigma |a_n|[/itex] converges. A counterexample showing that the absence of absolute convergence doesn't imply divergence (i.e. a series doesn't have to be absolutely convergent to converge) is the alternating harmonic series ([itex]a_n=\frac{(-1)^{n+1}}{n}[/itex]; [itex]\Sigma_{n=1}^{\infty}a_n=\ln(2)[/itex]). It is not divergent, but since the harmonic series is ([itex]|a_n|=\frac{1}{n}[/itex]), it's not absolutely convergent either; instead it is said to converge conditionally.

In fact, I believe Riemann proved a theorem (I can't remember its exact name) that essentially states that the terms of a conditionally convergent series can be rearranged so that the series diverges, or converges to any real number you like.

EDIT: It seems that it's called Riemann's Rearrangement Theorem.
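A numerical sketch of the example above (Python, illustrative only): the partial sums of the alternating harmonic series approach ##\ln 2##, even though the series of absolute values, the harmonic series, diverges.

```python
import math

def alt_harmonic_partial(n):
    """Partial sum of the alternating harmonic series 1 - 1/2 + 1/3 - ..."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

# The alternating-series error bound guarantees this is within 1/(n+1) of ln 2.
print(alt_harmonic_partial(10**5))  # close to math.log(2) ≈ 0.693147...
```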
 
  • #33
Think of the sum:

1+1+1+1+1+1+...=-1/2

This value is only well defined through ζ(0) (zeta-function regularization; the series itself diverges), but suppose we treat it as an ordinary identity, so we multiply both sides by a real variable 'a'.

From which we obtain:

a+a+a+a+a+...=-a/2

Let a=0

0+0+0+0+...=-0/2
0+0+0+0+...=0

That is the first approach.
 
  • #34
The second approach would be using the infinite geometric series:

1+x+x^2+x^3+x^4+...=1/(1-x), for abs(x)<1

We can subtract 1 from each side:

x+x^2+x^3+x^4+...=x/(1-x)

Let x=0

0+0+0+0+...=0
 
  • #35
The last approach is by adding the sum term by term:

0, 0+0, 0+0+0, 0+0+0+0, ...

We can carry on writing this forever, but the answer will never change, so the limit won't change.

Thus 0+0+0+...=0

We can also try averaging the partial sums, but we'll still end up with 0.
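For what it's worth, a small sketch of that averaging idea (Cesàro summation; Python, names mine): averaging the partial sums of ##0+0+0+\cdots## stays at 0, while the same averaging applied to Grandi's series tends to 1/2 instead.

```python
def cesaro_means(terms):
    """Running averages of the partial sums of the given terms."""
    partials, s = [], 0
    for t in terms:
        s += t
        partials.append(s)
    return [sum(partials[:k + 1]) / (k + 1) for k in range(len(partials))]

print(cesaro_means([0] * 6))                            # all 0.0 -- stays 0
print(cesaro_means([(-1) ** n for n in range(6)])[-1])  # 0.5 for Grandi's series
```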
 
