Equality of integrals vs. equality of integrands

  • Context: Undergrad
  • Thread starter: Ahmed Mehedi
  • Tags: Integrals

Discussion Overview

The discussion revolves around the implications of the equality of integrals versus the equality of integrands, specifically questioning whether the equality of two integrals over the interval from 0 to infinity implies that the functions being integrated are equal. The scope includes theoretical considerations, mathematical reasoning, and potential applications in calculus of variations.

Discussion Character

  • Debate/contested
  • Mathematical reasoning
  • Conceptual clarification

Main Points Raised

  • Some participants question whether $$\int_{t=0}^{\infty}f(t)dt=\int_{t=0}^{\infty}g(t)dt$$ implies $$f(t)=g(t)$$, suggesting that different functions can yield the same integral.
  • One participant provides examples of functions that have the same integral but are not equal, illustrating that the area under the curve can be the same for different functions.
  • Another participant discusses the injectivity of the mapping from integrable functions to their integrals, noting that a linear functional on an infinite-dimensional space is never injective.
  • Some participants propose that under certain conditions, such as continuity, one might be able to equate integrands if the integrals are equal over all intervals.
  • There is a suggestion that specific cases, such as those involving Gauss' Law, might allow for equating integrands under certain conditions.
  • Concerns are raised about the implications of setting integrals equal to zero and whether this leads to the conclusion that the functions must be zero everywhere.
  • Discussion includes the concept of functional derivatives in the context of optimization problems, with participants seeking clarification on the relationship between derivatives and integrals.
  • Some participants express uncertainty about the rigor of their statements and seek further resources or clarification on functional derivatives.

Areas of Agreement / Disagreement

Participants do not reach a consensus on whether the equality of integrals implies the equality of integrands. Multiple competing views remain, with some arguing for specific conditions under which equality might hold, while others provide counterexamples and express skepticism about the implications.

Contextual Notes

Limitations include the dependence on the continuity of functions and the nature of integrability, as well as the unresolved mathematical steps regarding the implications of equal integrals.

Ahmed Mehedi
Does $$\int_{t=0}^{\infty}f(t)dt=\int_{t=0}^{\infty}g(t)dt$$ imply $$f(t)=g(t)$$ ?
 
Ahmed Mehedi said:
Does $$\int_{t=0}^{\infty}f(t)dt=\int_{t=0}^{\infty}g(t)dt$$ imply $$f(t)=g(t)$$ ?
How could that possibly be true?
 
All you need is that the area under the curve from ##0## to ##\infty## is the same. There are a lot of different functions that satisfy this. E.g.

$$
f(x) = \begin{cases}
0, & x< 0 \\
1, & 0\leq x\leq 1 \\
0, & 1< x
\end{cases}$$ and $$
g(x) = \begin{cases}
0, & x< 1 \\
1, & 1\leq x\leq 2 \\
0, & 2< x
\end{cases}$$
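As a quick numerical sanity check (a sketch of mine, not part of the thread; the helper names are made up), a midpoint rule confirms that these two integrands have the same integral while disagreeing pointwise:

```python
# Midpoint-rule check that f and g from the post above have equal integrals
# but are different functions. Both vanish outside [0, 2], so integrating
# over [0, 2] captures the whole improper integral.

def f(x):
    return 1.0 if 0 <= x <= 1 else 0.0

def g(x):
    return 1.0 if 1 <= x <= 2 else 0.0

def midpoint_integral(h, a, b, n=100_000):
    """Midpoint-rule approximation of the integral of h over [a, b]."""
    dx = (b - a) / n
    return sum(h(a + (i + 0.5) * dx) for i in range(n)) * dx

area_f = midpoint_integral(f, 0.0, 2.0)
area_g = midpoint_integral(g, 0.0, 2.0)
print(area_f, area_g)   # both approximately 1.0
print(f(0.5), g(0.5))   # 1.0 0.0 -- the integrands differ
```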
 
@PeroK @dRic2
PeroK said:
How could that possibly be true?
Is it possible to prove the equality using the calculus of variations ... ? Or maybe it is a special case of it ... ? I am not sure though ...
 
This is false for many different reasons.

For example @etotheipi gave two different functions.

However, you can also do the following: Consider an integrable function ##f## and change its value at one point (or finitely many points). Let the function so obtained be ##g##. Then clearly ##f\neq g## yet ##\int f = \int g##.

For the advanced reader, you are asking if
$$R[0,\infty[ \to \mathbb{R}: f \mapsto \int_{0}^\infty f$$
is injective. This is a linear functional on the space of Riemann-integrable functions on ##[0, \infty[##, and a linear functional on an infinite-dimensional vector space is never injective: its kernel has codimension at most ##1##, so the kernel is itself infinite-dimensional and in particular nonzero.

Note that some partial results do hold:

(1) If ##f \geq 0## and ##\int f = 0## then ##f = 0## almost everywhere.

(2) If ##f \geq 0## and ##f## is continuous with ##\int f =0##, then ##f=0## (everywhere).
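A sketch of why (2) holds (the standard argument, paraphrased here rather than quoted from the thread): suppose ##f \geq 0## is continuous with ##f(x_0) > 0## for some ##x_0 \geq 0##. By continuity there is an interval ##I \subseteq [0, \infty)## of some length ##\delta > 0## containing ##x_0## on which ##f > f(x_0)/2##, so
$$\int_0^\infty f \;\geq\; \int_I f \;\geq\; \delta \cdot \frac{f(x_0)}{2} \;>\; 0,$$
contradicting ##\int f = 0##. Hence ##f## vanishes everywhere.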
 
Ahmed Mehedi said:
@PeroK @dRic2

Is it possible to prove the equality using the calculus of variations ... ? Or maybe it is a special case of it ... ? I am not sure though ...
Are you really asking that if:
$$\int_{t=0}^{\infty}f(t)dt= 0$$
Then ##f(t) = 0## everywhere?
 
There are some specific and limited circumstances under which you can equate certain integrands (I don't know whether it's totally rigorous, so perhaps @PeroK or @Math_QED can advise...). For instance, consider the following statement of Gauss' Law: $$\frac{Q}{\epsilon_0} = \int_V \frac{\rho}{\epsilon_0} \, dV = \oint_S \vec{E} \cdot d\vec{S} = \int_V \nabla \cdot \vec{E} \, dV $$ $$\int_V \frac{\rho}{\epsilon_0} \, dV = \int_V \nabla \cdot \vec{E} \, dV $$ Since this holds for any domain ##V##, you may deduce ##\nabla \cdot \vec{E} = \frac{\rho}{\epsilon_0}##.

I would suspect that if ##\int_{a}^{b}f(t)dt=\int_{a}^{b}g(t)dt## for all possible ##a, b##, then you would be able to say ##f(t) = g(t)##. But that's quite different to having fixed limits.
 
etotheipi said:
I would suspect that if ##\int_{a}^{b}f(t)dt=\int_{a}^{b}g(t)dt## for all possible ##a, b##, then you would be able to say ##f(t) = g(t)##. But that's quite different to having fixed limits.

If two continuous functions differ at a single point, then they differ on an interval. Moreover, if ##f(x_0) > g(x_0)##, then ##f(x) > g(x)## on some interval containing ##x_0##.

This is needed to extract the Euler-Lagrange equations from the calculus of variations.
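The same statement can also be seen via the fundamental theorem of calculus (a sketch, assuming ##f## and ##g## continuous; this paraphrase is mine): let ##h = f - g## and ##H(x) = \int_a^x h(t)\,dt##. The hypothesis that the integrals agree over every interval says ##H \equiv 0##, and the fundamental theorem of calculus then gives ##h(x) = H'(x) = 0## for every ##x##, i.e. ##f = g##.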
 
  • #10
etotheipi said:
I would suspect that if ##\int_{a}^{b}f(t)dt=\int_{a}^{b}g(t)dt## for all possible ##a, b##, then you would be able to say ##f(t) = g(t)##. But that's quite different to having fixed limits.

This is not quite true (it's true if you require that ##f,g## are continuous). If they are only Riemann-integrable, the best you can get is that they are equal almost everywhere.

The Riemann integral does not see the behaviour at single points, so you can take the example I made earlier in post #6 to get ##\int_a^b f = \int_a^b g## for all ##a,b## yet ##f \neq g##.
 
  • #11
PeroK said:
Are you really asking that if:
$$\int_{t=0}^{\infty}f(t)dt= 0$$
Then ##f(t) = 0## everywhere?

I am very sorry that I could not make my message clear. To be specific, my question goes as follows:

Let us consider the following Lagrangian in the context of an optimization problem:

$$L=B\int_{t=0}^{\infty}e^{-\beta t}\frac{c(t)^{1-\theta}}{1-\theta}dt+\lambda \left[ k(0)+\int_{t=0}^{\infty} e^{-R(t)}e^{(n+g)t}w(t)dt - \int_{t=0}^{\infty} e^{-R(t)}e^{(n+g)t}c(t)dt \right]$$

After taking the first partial derivative of the above Lagrangian with respect to ##c(t)## (and setting it to zero as the first-order optimality condition), it is written that

$$Be^{-\beta t} c(t)^{-\theta}-\lambda e^{-R(t)}e^{(n+g)t}=0$$

It seems that they take the first partial derivative of the Lagrangian with respect to ##c(t)## and simply ignore the ##dt## on both sides.

How does the second line follow from the first? They said it can be proved using the calculus of variations. But how can I prove it?
 
  • #12
Ahmed Mehedi said:
I am very sorry that I could not make my message clear. To be specific, my question goes as follows:

Let us consider the following Lagrangian in the context of an optimization problem:

$$L=B\int_{t=0}^{\infty}e^{-\beta t}\frac{c(t)^{1-\theta}}{1-\theta}dt+\lambda \left[ k(0)+\int_{t=0}^{\infty} e^{-R(t)}e^{(n+g)t}w(t)dt - \int_{t=0}^{\infty} e^{-R(t)}e^{(n+g)t}c(t)dt \right]$$

After taking the first partial derivative of the above Lagrangian with respect to ##c(t)## (and setting it to zero as the first-order optimality condition), it is written that

$$Be^{-\beta t} c(t)^{-\theta}-\lambda e^{-R(t)}e^{(n+g)t}=0$$

It seems that they take the first partial derivative of the Lagrangian with respect to ##c(t)## and simply ignore the ##dt## on both sides.

How does the second line follow from the first? They said it can be proved using the calculus of variations. But how can I prove it?

@Math_QED @etotheipi
 
  • #13
That's a functional derivative, not a partial derivative. It's another story.

If you want a quick recipe: a functional derivative like ##\frac {\delta L} {\delta g}##, where ##L =\int h[g]##, can be evaluated by throwing away the integral and calculating ##\frac {\partial h[g]} {\partial g}##. OK, don't hate me for this comment.
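To connect this recipe back to the Lagrangian above, here is a hedged reconstruction of the variational argument (my sketch, not a quote from the thread): perturb the candidate optimum as ##c(t) \to c(t) + \epsilon\,\eta(t)## for an arbitrary well-behaved variation ##\eta##, and impose ##\left.\frac{dL}{d\epsilon}\right|_{\epsilon=0} = 0##. Differentiating under the integral signs (using ##\frac{d}{dc}\frac{c^{1-\theta}}{1-\theta} = c^{-\theta}##) gives
$$\int_0^\infty \left[ B e^{-\beta t} c(t)^{-\theta} - \lambda e^{-R(t)} e^{(n+g)t} \right] \eta(t)\, dt = 0$$
for every ##\eta##. By the fundamental lemma of the calculus of variations (the continuity point PeroK made earlier), the bracketed expression must vanish pointwise, which is exactly the stated first-order condition. This is why the ##dt## appears to be "ignored".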
 
  • #14
dRic2 said:
That's a functional derivative, not a partial derivative. It's another story.

If you want a quick recipe: a functional derivative like ##\frac {\delta L} {\delta g}##, where ##L =\int h[g]##, can be evaluated by throwing away the integral and calculating ##\frac {\partial h[g]} {\partial g}##. OK, don't hate me for this comment.

Maybe you are right! Perhaps they were talking about a functional derivative and not a partial one. Can you recommend any good quick read on functional derivatives?
 
  • #15
LOL this question took a turn...

I don't know if I can help from here on, I've only ever skimmed through the first chapter of a calculus of variations textbook.
 
  • #16
etotheipi said:
LOL this question took a turn...

I don't know if I can help from here on, I've only ever skimmed through the first chapter of a calculus of variations textbook.

HAHA ... I am yet to see a textbook on the calculus of variations ... let alone chapter one ...
 
  • #17
Ahmed Mehedi said:
Can you recommend any good quick read on functional derivatives?
Sorry, I'm not very familiar with functional derivatives. I just worked out some tricks to evaluate them in case of need. All I know comes from pages 54-56 of Lanczos' book on analytical mechanics and from 5 pages of notes by a professor of mine.
 
  • #18
Given
$$\int_{0}^{\infty} f(t) dt = \int_{0}^{\infty} g(t) dt,$$ the most we can get is:
$$\int_{0}^{\infty} f(t) dt - \int_{0}^{\infty} g(t) dt = 0 $$
$$\lim_{x\to \infty} \int_{0}^{x} \left[f(t) -g(t) \right] dt = 0 $$
$$\lim_{x \to \infty} \frac{d}{dx} \int_{0}^{x} \left[f(t) - g(t) \right] dt = 0$$
$$\lim_{x \to \infty} \left[f(x) - g(x)\right] = 0$$
That is to say, the functions ##f## and ##g## converge to each other as ##x## approaches infinity.
 
  • #19
Adesh said:
Given
$$\int_{0}^{\infty} f(t) dt = \int_{0}^{\infty} g(t) dt,$$ the most we can get is:
$$\int_{0}^{\infty} f(t) dt - \int_{0}^{\infty} g(t) dt = 0 $$
$$\lim_{x\to \infty} \int_{0}^{x} \left[f(t) -g(t) \right] dt = 0 $$
$$\lim_{x \to \infty} \frac{d}{dx} \int_{0}^{x} \left[f(t) - g(t) \right] dt = 0$$
$$\lim_{x \to \infty} \left[f(x) - g(x)\right] = 0$$
That is to say, the functions ##f## and ##g## converge to each other as ##x## approaches infinity.

A good interpretation!
 
  • #20
Adesh said:
Given
$$\int_{0}^{\infty} f(t) dt = \int_{0}^{\infty} g(t) dt,$$ the most we can get is:
$$\int_{0}^{\infty} f(t) dt - \int_{0}^{\infty} g(t) dt = 0 $$
$$\lim_{x\to \infty} \int_{0}^{x} \left[f(t) -g(t) \right] dt = 0 $$
$$\lim_{x \to \infty} \frac{d}{dx} \int_{0}^{x} \left[f(t) - g(t) \right] dt = 0$$
$$\lim_{x \to \infty} \left[f(x) - g(x)\right] = 0$$
That is to say, the functions ##f## and ##g## converge to each other as ##x## approaches infinity.

This is false. Consider ##f=0## and $$g(x) = \begin{cases}1 \quad x \in \mathbb{N} \\ 0 \quad x \notin \mathbb{N}\end{cases}$$
Then $$\int_0^\infty f = 0 = \int_0^\infty g$$ yet $$\lim_{x \to \infty}[ f(x)-g(x)] = -\lim_{x \to \infty} g(x) $$ does not exist.

The flaw happens when you introduce the derivative.
 
  • #21
Math_QED said:
This is false. Consider ##f=0## and $$g(x) = \begin{cases}1 \quad x \in \mathbb{N} \\ 0 \quad x \notin \mathbb{N}\end{cases}$$
Then $$\int_0^\infty f = 0 = \int_0^\infty g$$ yet $$\lim_{x \to \infty}[ f(x)-g(x)] = -\lim_{x \to \infty} g(x) $$ does not exist.

The flaw happens when you introduce the derivative.
I assumed the functions to be continuous.
 
  • #22
Adesh said:
I assumed the functions to be continuous.

Even then you must justify switching the limits involved, which you didn't. I'm pretty sure the statement is even false for continuous functions.
 
  • #23
Adesh said:
I assumed the functions to be continuous.
It's still false. There are continuous functions that do not converge to ##0## as ##t \rightarrow \infty## yet the integral exists.

It's a good exercise to find one.
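One standard construction that answers the exercise (this sketch and its names are mine, not from the thread): put a triangular bump of height ##1## and base width ##2^{1-n}## at each integer ##n \geq 1##. The bumps contribute total area ##\sum_{n\geq 1} 2^{-n} = 1##, so the improper integral converges, yet the function equals ##1## at every integer and therefore does not tend to ##0##:

```python
# Continuous f >= 0 with a convergent integral over [0, infinity) that does
# NOT tend to 0: triangular bumps of height 1 centred at n = 1, 2, 3, ...
# with half-width 2**(-n). The n-th bump has area 2**(-n); bumps never
# overlap because their half-widths are at most 1/2.

def bump_f(x):
    n = round(x)                     # index of the nearest bump centre
    if n < 1:
        return 0.0
    half_width = 2.0 ** (-n)
    return max(0.0, 1.0 - abs(x - n) / half_width)

# Total area under all bumps: sum of 2**(-n) for n >= 1, which is 1.
total_area = sum(2.0 ** (-n) for n in range(1, 60))

print(bump_f(10))     # 1.0 -> no decay at integer points
print(bump_f(10.5))   # 0.0 -> zero between bumps
print(total_area)     # ~1.0 -> the integral converges
```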
 
  • #24
Math_QED said:
Even then you must justify switching the limits involved, which you didn't. I'm pretty sure the statement is even false for continuous functions.
We can do the switching when they are monotone.
 
  • #25
Adesh said:
We can do the switching when they are monotone.

If you keep throwing in extra assumptions, eventually you will be right, yes...
 
  • #26
PeroK said:
There are continuous functions that do not converge to ##0## as ##t \to \infty## yet the integral exists.
I didn’t say they converge to zero, I said they get closer and closer to each other as ##x## goes to ##\infty##.
 
  • #27
Adesh said:
I didn’t say they converge to zero, I said they get closer and closer to each other as ##x## goes to ##\infty##.
Which is false. The difference of the functions need not converge to ##0##.

Note that if the integral exists and the limit exists, then the limit must be zero. That's true.
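A sketch of that last claim (my paraphrase, not a quote): if ##\lim_{x\to\infty} [f(x) - g(x)] = c \neq 0##, then ##f - g## eventually has the sign of ##c## with ##|f - g| > |c|/2##, so ##\int_0^x (f - g)## changes by more than ##|c|/2## per unit length for large ##x## and cannot converge as ##x \to \infty##. Hence when both the integral and the limit exist, the limit must be ##0##.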
 
  • #28
PeroK said:
Which is false. The difference of the functions need not converge to ##0##.
In this case they are coming out to be zero. :biggrin:
 
  • #29
@Infrared Did you study from Rudin by yourself? I mean self-study, or was your university good enough that you could learn it there? (Because at the universities I know of, you cannot learn subjects which require time, like analysis, because the teachers are not very interested in teaching.)
 
  • #30
Adesh said:
@Infrared Did you study from Rudin by yourself? I mean self-study, or was your university good enough that you could learn it there? (Because at the universities I know of, you cannot learn subjects which require time, like analysis, because the teachers are not very interested in teaching.)

Teachers are only guides. At university, the student is the one who must put in the effort. There are plenty of people who successfully learned analysis at university, so I'm not sure where your claim comes from.
 
