Just as discussed in Stewart, in the "Strategy for Integration" section.
I found notes online that also say:
If g(x) >= f(x) >= 0, then you want to prove convergence of g; if f(x) >= g(x) >= 0, then you want to prove divergence of g. I'm pretty sure I follow the logic here, but how exactly does one pick g? I've been working for hours and keep picking a g that gives the opposite of the required case (e.g., divergence for g(x) >= f(x) >= 0), which then proves nothing about f(x)! :(
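For concreteness, here is a standard textbook-style instance of the convergence direction (my own example, not from the notes): to show $\int_1^\infty e^{-x^2}\,dx$ converges, pick the larger, easier function $g(x) = e^{-x}$.

```latex
% Comparison test, convergence direction:
% if 0 <= f(x) <= g(x) on [a, infinity) and the integral of g converges,
% then the integral of f converges too.
\[
0 \le e^{-x^2} \le e^{-x} \quad \text{for } x \ge 1,
\qquad
\int_1^\infty e^{-x}\,dx = e^{-1} < \infty,
\]
\[
\therefore \int_1^\infty e^{-x^2}\,dx \ \text{converges.}
\]
```

The heuristic at work: you can't antidifferentiate $e^{-x^2}$, so you bound it by something you *can* integrate; since $x^2 \ge x$ on $[1,\infty)$, we get $e^{-x^2} \le e^{-x}$, and the bounding integral converges, which is exactly the case that lets you conclude something about $f$.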