- #1
sonofagun
I need help with the following theorem:
Let I, J ⊆ ℝ be open intervals, let x ∈ I, and let g: I\{x} → ℝ and f: J → ℝ be functions with g[I\{x}] ⊆ J and lim_{z→x} g(z) = L ∈ J. Assume that lim_{y→L} f(y) exists and that g[I\{x}] ⊆ J\{L}, or, in case L ∈ g[I\{x}], that lim_{y→L} f(y) = f(L). Then f∘g converges at x, and lim_{z→x} f(g(z)) = lim_{y→L} f(y).
I don't get why these assumptions are necessary. We're assuming that lim_{y→L} f(y) exists, and additionally either that g never takes the value L, or, if it does, that lim_{y→L} f(y) = f(L). Why isn't the existence of lim_{y→L} f(y) enough on its own?
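For concreteness, here is the kind of situation I suspect the second assumption is meant to rule out (my own attempt at an example, not from the text): take I = J = ℝ, x = 0, g(z) = 0 for every z ≠ 0, and f(y) = 0 for y ≠ 0 with f(0) = 1. Then lim_{z→0} g(z) = 0 = L and lim_{y→0} f(y) = 0 exists, but L ∈ g[I\{0}] and lim_{y→0} f(y) = 0 ≠ 1 = f(L), so neither alternative assumption holds. And indeed f(g(z)) = f(0) = 1 for every z ≠ 0, so lim_{z→0} f(g(z)) = 1 ≠ lim_{y→0} f(y). Is this the sort of failure the assumptions are excluding, or is there more to it?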