pleasehelpme said:
I'm doing my best. I'll try to solve that, but I can't tell whether the upper limit says e^2x or e^2(pi). I will try it. This is an advanced worksheet and I'm just trying to see if I can get through the steps. I know that later on I will need to understand the Fundamental Theorems better, but for now we're just supposed to see if we can slowly use the steps to figure out the answers. (That's what our professor said, although I get that we should understand them better and not worry about the answers.) I have tried so far to understand the two theorems, but I only know some strict formulas for them, and I'm having trouble using those when something looks a little different in the actual problem.
I believe you are doing your best. What I am saying underscores the difference between "using a formula" and "knowing what it means". If you can do the second, you can do the first, but often not the other way around. You see, when using things in some context other than the classroom (for example, trying to find the actual area under some actual curve, given by some real function, because maybe you need to determine whether your new manufacturing process will bankrupt you), things usually don't come in "the exact form" we need them in, so one has to get better at "manipulating things".
As a side note, the upper limit is $e^{2x}$. Your answer shouldn't be an approximation, but rather "an exact form" (so you can use square-root symbols, and irrational numbers in general; don't try to simplify to "decimals").
I'm not exactly sure what you mean by "the first rule" and "the second rule".
The Fundamental Theorem of Calculus comes in "two flavors".
One is:
Let $f:[a,b] \to \Bbb R$ be a continuous function (this means continuous on all of the "inside" $(a,b)$, with "one-sided continuity" at the endpoints).
Then, for $x \in (a,b)$ (an "inside" point), if we define:
$\displaystyle F(x) = \int_a^x f(t)\ dt$
the function $F$ is differentiable on $(a,b)$ with:
$F'(x) = f(x)$.
(A function $G$ such that $G'(x) = f(x)$ is called an "anti-derivative" for $f$. One caution: anti-derivatives are not unique. For example, $x^2 + 2$ and $x^2$ are both anti-derivatives of $2x$.)
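A quick example of my own (nothing to do with your worksheet, just to see this first flavor in action): take $f(t) = t^2$ and $a = 0$. Then

$\displaystyle F(x) = \int_0^x t^2\ dt = \frac{x^3}{3}$

and differentiating gives $F'(x) = x^2 = f(x)$, exactly as the theorem promises.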
Like any worthwhile theorem, it can be proven, but like any worthwhile theorem, the proof is harder than using it (that's why we have theorems: to avoid "proving" every little thing we do from scratch). Wikipedia has a good "pictorial" explanation of why it works:
File:FTC geometric2.png - Wikipedia, the free encyclopedia
The "other flavor" is more useful for definite integrals:
If $F(x)$ is an anti-derivative for $f(x)$ on the interval $[a,b]$, then:
$\displaystyle \int_a^b f(x)\ dx = F(b) - F(a)$.
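Again, a small example of my own: with $f(x) = 2x$ and the anti-derivative $F(x) = x^2$,

$\displaystyle \int_1^3 2x\ dx = F(3) - F(1) = 9 - 1 = 8$

which is a single NUMBER: the area under the line $y = 2x$ between $x = 1$ and $x = 3$.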
The problem here (in both your original problem, and the second problem I posed for you) is that the upper limit isn't "just $x$", it's a function of $x$.
So if we call that function $u(x)$, the Fundamental Theorem (the first one, not the second) tells us:
$F'(u) = f(u)$
However, $h(x) = F(u(x))$ (your integral, viewed as a function of $x$) is a function of $x$, not of $u$. If we want to measure how fast $h$ changes with respect to $x$, we have to account for how fast $u$ changes with respect to $x$ (it changes twice as fast; the -1 just tracks "which values of $x$ it starts from").
In your particular problem, you "get lucky" because $f(4) = 0$, so the multiplying factor doesn't "trip you up". I designed MY problem so no such coincidence would occur.
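To make the bookkeeping concrete (this is just an illustration with a generic continuous $f$ and the upper limit $e^{2x}$ from your sheet, not your exact worksheet problem): if

$\displaystyle h(x) = \int_a^{e^{2x}} f(t)\ dt$

then with $u(x) = e^{2x}$ we have $h(x) = F(u(x))$, and the chain rule gives

$h'(x) = F'(u(x))\, u'(x) = f(e^{2x}) \cdot 2e^{2x}$.

That extra factor $2e^{2x}$ is exactly the "how fast $u$ changes with respect to $x$" correction I was talking about.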
********
I suspect what you mean by "the first rule" is just using what I am calling "the second theorem" and just substituting in $x$ for $b$. Let me see if I can make clear the difference.
In the first version, $F(x)$ is a function, and we are talking about the FUNCTION we get when we differentiate the function:
$\displaystyle F(x) = \int_a^x f(t)\ dt$
that is: "$x$ is a variable".
In the second version, $F(b) - F(a)$ is a NUMBER, which represents the area under the curve $f(t)$ from $t = a$ to $t = b$. Here, $b$ is a CONSTANT.
To prove the first from the second, you would need to show that:
1) that $F(x)$, as defined, really is a function (well-defined for each $x$ in the interval).
2) that $F$ is continuous, and differentiable on $(a,b)$.
The trouble is, you need to do this for every single possible anti-derivative of $f$ (and there are infinitely many), and every single point in the interval $(a,b)$ (of which there are ALSO infinitely many). That's a lot harder than it might seem, at first glance. And you can't "mix up different anti-derivatives" because you lose continuity.
It may be that "your first rule" is for deriving values for:
$\displaystyle k(x) = \int_{g(x)}^{h(x)} f(t)\ dt$
in which case the formula you give:
$k'(x) = f(h(x))\,h'(x) - f(g(x))\,g'(x)$ is indeed correct (note that it is $f$, not $f'$, that gets evaluated at the limits), and has the chain rule built into it.
Note that if $g(x) = a$ (a constant function), the second term you're subtracting goes away since it has 0 derivative.
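One more example of my own, to see the whole formula at work: let

$\displaystyle k(x) = \int_x^{x^2} \cos t\ dt$

so $g(x) = x$ and $h(x) = x^2$. The rule gives

$k'(x) = \cos(x^2)\cdot(2x) - \cos(x)\cdot(1)$.

You can check this directly: $k(x) = \sin(x^2) - \sin(x)$, and differentiating (with the chain rule) gives the same thing.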
*************
I can also understand that your professor may well take the approach "learn how to use the formulas first" and that "you'll learn the (proof) mechanics later" (in a more advanced class, perhaps, or perhaps later in the course). It is perhaps the case that he/she is teaching non-math-majors as well as math majors, and wants to impart some useful information without getting bogged down in theory. I take this as a concession of failure on the professor's part, in the interests of expediency.
It is perhaps understandable: a course has only so much time to spend on any given topic, and not all students in the class need to know the "theory" to reap the benefits (you don't have to know how a calculator works to use it, either). But in "real life" (in math, as in other things) theory pwns examples. Knowing how to fish, and having once caught a fish, are two different things.