Equality of integrals vs. equality of integrands

  • #1

Main Question or Discussion Point

Does $$\int_{t=0}^{\infty}f(t)dt=\int_{t=0}^{\infty}g(t)dt$$ imply $$f(t)=g(t)$$ ?
 

Answers and Replies

  • #3
PeroK
Science Advisor
Homework Helper
Insights Author
Gold Member
Does $$\int_{t=0}^{\infty}f(t)dt=\int_{t=0}^{\infty}g(t)dt$$ imply $$f(t)=g(t)$$ ?
How could that possibly be true?
 
  • #4
etotheipi
Gold Member
2019 Award
All you need is that the area under the curve from ##0## to ##\infty## is the same, and many different functions satisfy this. For example:

$$
f(x) = \begin{cases}
0, & x< 0 \\
1, & 0\leq x\leq 1 \\
0, & 1< x
\end{cases}$$ and $$
g(x) = \begin{cases}
0, & x< 1 \\
1, & 1\leq x\leq 2 \\
0, & 2< x
\end{cases}$$
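A quick numerical sanity check (a minimal Python sketch using scipy.integrate.quad, with the integration range split at the jumps; not part of the original argument): both functions enclose an area of ##1## while disagreeing at, say, ##x = 1/2##.

from scipy.integrate import quad

# f and g from the example above: each equals 1 on an interval of length 1
# and 0 elsewhere, so both improper integrals over [0, infinity) equal 1.
def f(x):
    return 1.0 if 0 <= x <= 1 else 0.0

def g(x):
    return 1.0 if 1 <= x <= 2 else 0.0

# Integrate over the supports only; the tails contribute nothing.
area_f, _ = quad(f, 0, 1)
area_g, _ = quad(g, 1, 2)

print(area_f, area_g)    # 1.0 1.0  -- equal integrals
print(f(0.5), g(0.5))    # 1.0 0.0  -- yet different functions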
 
  • Like
Likes Delta2, Math_QED, Ahmed Mehedi and 1 other person
  • #5
@PeroK @dRic2
How could that possibly be true?
Is it possible to prove the equality using the calculus of variations? Or maybe it is a special case of it? I am not sure, though...
 
  • #6
Math_QED
Science Advisor
Homework Helper
2019 Award
This is false for many different reasons.

For example @etotheipi gave two different functions.

However, you can also do the following: Consider an integrable function ##f## and change its value at one (or finitely many) points. Let the function so obtained be ##g##. Then clearly ##f\neq g## yet ##\int f = \int g##.

For the advanced reader, you are asking if
$$R[0,\infty[ \to \mathbb{R}: f \mapsto \int_{0}^\infty f$$
is injective. This is a linear functional on the space of Riemann-integrable functions on ##[0, \infty[##, and a linear functional on an infinite dimensional vector space is never injective.
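Concretely, one standard way to see the non-injectivity: any nonzero function whose positive and negative parts cancel already lies in the kernel, e.g.
$$h(x) = \begin{cases} 1, & 0\leq x< 1 \\ -1, & 1\leq x< 2 \\ 0, & \text{otherwise} \end{cases} \qquad\Rightarrow\qquad \int_0^\infty h = 0 = \int_0^\infty 0,$$
so the functional sends the distinct functions ##h## and ##0## to the same value.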

Note that some partial results do hold:

(1) If ##f \geq 0## and ##\int f = 0## then ##f = 0## almost everywhere.

(2) If ##f \geq 0## and ##f## is continuous with ##\int f =0##, then ##f=0## (everywhere).
 
  • Like
Likes PeroK, etotheipi and Ahmed Mehedi
  • #7
PeroK
Science Advisor
Homework Helper
Insights Author
Gold Member
@PeroK @dRic2

Is it possible to prove the equality using the calculus of variations? Or maybe it is a special case of it? I am not sure, though...
Are you really asking that if:
$$\int_{t=0}^{\infty}f(t)dt= 0$$
Then ##f(t) = 0## everywhere?
 
  • Like
Likes etotheipi
  • #8
etotheipi
Gold Member
2019 Award
There are some specific and limited circumstances under which you can equate certain integrands (I don't know whether it's totally rigorous, so perhaps @PeroK or @Math_QED can advise...). For instance, consider the following statement of Gauss' Law: $$\frac{Q}{\epsilon_0} = \int_V \frac{\rho}{\epsilon_0} \, dV = \oint_S \vec{E} \cdot d\vec{S} = \int_V \nabla \cdot \vec{E} \, dV $$ $$\int_V \frac{\rho}{\epsilon_0} \, dV = \int_V \nabla \cdot \vec{E} \, dV $$ Since this holds for any domain ##V##, you may deduce ##\nabla \cdot \vec{E} = \frac{\rho}{\epsilon_0}##.

I would suspect that if ##\int_{a}^{b}f(t)dt=\int_{a}^{b}g(t)dt## for all possible ##a, b##, then you would be able to say ##f(t) = g(t)##. But that's quite different from having fixed limits.
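For continuous ##f## and ##g## this can indeed be made precise via the fundamental theorem of calculus (a sketch; the caveat for merely integrable functions comes up a couple of posts below):
$$H(x) := \int_{a}^{x} \big[f(t)-g(t)\big]\,dt = 0 \ \text{ for all } x \quad\Longrightarrow\quad f(x) - g(x) = H'(x) = 0.$$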
 
  • Like
Likes Delta2, Ahmed Mehedi and PeroK
  • #9
PeroK
Science Advisor
Homework Helper
Insights Author
Gold Member
I would suspect that if ##\int_{a}^{b}f(t)dt=\int_{a}^{b}g(t)dt## for all possible ##a, b##, then you would be able to say ##f(t) = g(t)##. But that's quite different from having fixed limits.
If two continuous functions differ at a single point, then they differ on an interval. Moreover, if ##f(x_0) > g(x_0)##, then ##f(x) > g(x)## on some interval containing ##x_0##.

This is needed to extract the Euler-Lagrange equations from the calculus of variations.
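Spelled out (filling in the standard ##\epsilon##-##\delta## step): if ##f(x_0) - g(x_0) = \epsilon > 0##, continuity gives ##f - g > \epsilon/2## on some interval ##(x_0-\delta, x_0+\delta)##, so
$$\int_{x_0-\delta}^{x_0+\delta} \big[f(t) - g(t)\big]\,dt \geq \frac{\epsilon}{2}\cdot 2\delta = \epsilon\delta > 0,$$
contradicting equality of the integrals over that interval.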
 
  • Informative
  • Like
Likes hutchphd and etotheipi
  • #10
Math_QED
Science Advisor
Homework Helper
2019 Award
I would suspect that if ##\int_{a}^{b}f(t)dt=\int_{a}^{b}g(t)dt## for all possible ##a, b##, then you would be able to say ##f(t) = g(t)##. But that's quite different from having fixed limits.
This is not quite true (it is true if you require ##f,g## to be continuous). If they are only Riemann-integrable, the best you can get is that they are equal almost everywhere.

The Riemann integral does not see the behaviour at single points, so you can take the example I gave earlier in post #6 to get ##\int_a^b f = \int_a^b g## for all ##a,b## yet ##f \neq g##.
 
  • Like
  • Informative
Likes Delta2 and etotheipi
  • #11
Are you really asking that if:
$$\int_{t=0}^{\infty}f(t)dt= 0$$
Then ##f(t) = 0## everywhere?
I am very sorry that I could not make my message clear. To be specific, my question goes as follows:

Let us consider the following Lagrangian in the context of an optimization problem:

$$L=B\int_{t=0}^{\infty}e^{-\beta t}\frac{c(t)^{1-\theta}}{1-\theta}dt+\lambda \left[ k(0)+\int_{t=0}^{\infty} e^{-R(t)}e^{(n+g)t}w(t)dt - \int_{t=0}^{\infty} e^{-R(t)}e^{(n+g)t}c(t)dt \right]$$

After taking the first partial derivative of the above Lagrangian with respect to ##c(t)## (and setting it to zero as the first-order optimality condition), it is written that

$$Be^{-\beta t} c(t)^{-\theta}-\lambda e^{-R(t)}e^{(n+g)t}=0$$

It seems that they take the first partial derivative of the Lagrangian with respect to ##c(t)## and simply drop the ##dt## from both sides.

How does the second line follow from the first? They said it can be proved using the calculus of variations. But how can I prove it?
 
  • Wow
Likes PeroK
  • #12
@Math_QED @etotheipi
 
  • #13
dRic2
Gold Member
That's a functional derivative, not a partial derivative. It's another story.

If you want a quick recipe, a functional derivative like ##\frac {\delta L} {\delta g}## where ##L =\int h[g]## can be evaluated by throwing away the integral and calculating ##\frac {\partial h[g]} {\partial g}##. OK, don't hate me for this comment.
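For the record, the precise statement behind that recipe is the fundamental lemma of the calculus of variations (a sketch, not a rigorous treatment): perturb the unknown function by ##\epsilon\,\eta(t)## and require the first-order term to vanish for every admissible ##\eta##. For the Lagrangian in post #11 this reads
$$\frac{d}{d\epsilon}L\big[c+\epsilon\eta\big]\Big|_{\epsilon=0} = \int_{0}^{\infty}\Big[Be^{-\beta t}c(t)^{-\theta} - \lambda e^{-R(t)}e^{(n+g)t}\Big]\,\eta(t)\,dt = 0 \quad\text{for all } \eta,$$
and since this holds for every ##\eta## (and the bracket is continuous), the lemma forces the bracket to vanish pointwise, which is exactly the stated first-order condition.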
 
  • Like
Likes Ahmed Mehedi
  • #14
Maybe you are right! Perhaps they were talking about the functional derivative and not the partial one. Can you provide me with any good quick read on functional derivatives?
 
  • #15
etotheipi
Gold Member
2019 Award
LOL this question took a turn...

I don't know if I can help from here on, I've only ever skimmed through the first chapter of a calculus of variations textbook.
 
  • Like
Likes Ahmed Mehedi and dRic2
  • #16
LOL this question took a turn...

I don't know if I can help from here on, I've only ever skimmed through the first chapter of a calculus of variations textbook.
Haha... I have yet to see a textbook on the calculus of variations, let alone chapter one...
 
  • #17
dRic2
Gold Member
Can you provide me with any good quick read on functional derivatives?
Sorry, I'm not very familiar with functional derivatives. I just worked out some tricks to evaluate them in case of need. All I know comes from pages 54-56 of Lanczos' book on analytical mechanics and from five pages of notes by a professor of mine.
 
  • Like
Likes Ahmed Mehedi
  • #18
Given
$$\int_{0}^{\infty} f(t) dt = \int_{0}^{\infty} g(t) dt$$ the most we can get is:
$$
\int_{0}^{\infty} f(t) dt - \int_{0}^{\infty} g(t) dt = 0 $$
$$\lim_{x\to \infty} \int_{0}^{x} \left[f(t) -g(t) \right] dt = 0 $$
$$\lim_{x \to \infty} \frac{d}{dx} \int_{0}^{x} \left[f(t) - g(t) \right] dt = 0$$
$$\lim_{x \to \infty} \left[f(x) - g(x)\right] = 0$$
That is to say, the functions ##f## and ##g## converge to each other as ##x## approaches infinity.
 
  • Like
  • Skeptical
Likes Delta2, Math_QED, PeroK and 2 others
  • #19
That is to say, the functions ##f## and ##g## converge to each other as ##x## approaches infinity.
A good interpretation!
 
  • Like
Likes Adesh
  • #20
Math_QED
Science Advisor
Homework Helper
2019 Award
$$\lim_{x \to \infty} \left[f(x) - g(x)\right] = 0$$
That is to say, the functions ##f## and ##g## converge to each other as ##x## approaches infinity.
This is false. Consider ##f=0## and $$g(x) = \begin{cases}1 \quad x \in \mathbb{N} \\ 0 \quad x \notin \mathbb{N}\end{cases}$$
Then $$\int_0^\infty f = 0 = \int_0^\infty g$$ yet $$\lim_{x \to \infty}[ f(x)-g(x)] = -\lim_{x \to \infty} g(x) $$ does not exist.

The flaw happens when you introduce the derivative.
 
  • Informative
Likes etotheipi
  • #21
I assumed the functions to be continuous.
 
  • #22
Math_QED
Science Advisor
Homework Helper
2019 Award
I assumed the functions to be continuous.
Even then you must justify switching the limits involved, which you didn't. I'm pretty sure the statement is even false for continuous functions.
 
  • Like
Likes etotheipi
  • #23
PeroK
Science Advisor
Homework Helper
Insights Author
Gold Member
I assumed the functions to be continuous.
It's still false. There are continuous functions that do not converge to ##0## as ##t \rightarrow \infty## yet the integral exists.

It's a good exercise to find one.
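One standard construction, in case a hint helps (spoiler): let ##f## consist of triangular spikes of height ##1## centred at the integers ##n\geq 1##, the ##n##-th spike having base width ##2^{-n}##, with ##f=0## elsewhere. Then ##f## is continuous,
$$\int_0^\infty f(t)\,dt = \sum_{n=1}^{\infty}\frac{1}{2}\cdot 2^{-n} = \frac{1}{2},$$
yet ##f(n)=1## for every integer ##n##, so ##f(t)## does not tend to ##0## as ##t\to\infty##.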
 
  • Like
Likes Infrared and Math_QED
  • #24
Even then you must justify switching the limits involved, which you didn't. I'm pretty sure the statement is even false for continuous functions.
We can do the switching when they are monotone.
 
  • #25
Math_QED
Science Advisor
Homework Helper
2019 Award
We can do the switching when they are monotone.
If you keep throwing in extra assumptions, eventually you will be right, yes...
 
