MHB Taylor's Theorem ... Loring W. Tu, Lemma 1.4

Math Amateur
I am reading Loring W. Tu's book: "An Introduction to Manifolds" (Second Edition) ...

I need help in order to fully understand the proof of Tu's Lemma 1.4: Taylor's Theorem with Remainder ...

Lemma 1.4 reads as follows: View attachment 8631, View attachment 8632
My questions are as follows:

Question 1

In the above text from Tu we read the following:

" ... ... In case $$n = 1$$ and $$p = 0$$, this lemma says that

$$f(x) = f(0) + x g_1(x)$$ ... ... "

Now Tu seems to put $$n = 1$$ in the equation in the lemma but does not change $$\mathbb{R}^n$$ to $$\mathbb{R}^1$$ and does not change $$x = (x^1, x^2, \ldots, x^n)$$ to $$x = (x^1)$$ ... ... How can this be valid?

Question 2

In the above text from Tu we read the following:

" ... ... Applying the lemma repeatedly gives $$g_i(x) = g_i(0) + x g_{ i + 1 } (x)$$ ... ... "How exactly does Tu arrive at the above equation ... I take it he puts $$f = g_i$$ and he pits p = 0 ... but how does he get $$x g_{ i + 1 } (x)$$ out of the summation term .. ? ( ... note that it is the i + 1 term in g_{ i + 1 } that I find puzzling ... )
Question 3

I must say that generally I am having trouble following the overall 'strategy' of the proof ... can it be summarised as transforming the equations of the lemma into a valid Taylor series ...?

... ... but mind you, he only seems to show this for $$p = 0$$ ...?
Hope someone can help ...?

Peter
 

Attachments

  • Tu - 1 - Lemma 1.4 ... ... PART 1 ... .png
  • Tu - 2 - Lemma 1.4 ... ... PART 2 ... .png
GJA

Hi Peter,

Before answering your questions directly, I think it is worth noting the difference between when $x^{i}$ represents the $i$th coordinate of the vector $x=(x^{1}, x^{2}, \ldots, x^{n})$ and when $x^{i}$ is taken to mean the single variable $x$ raised to the $i$th power. In the proof of the lemma it is the former; in the justification of the result after the lemma (e.g., in equation (1.2)), it is the latter.
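A quick illustration of the distinction (using the $p = 0$ case of the lemma, written out below): in
$$f(x) = f(0) + \sum_{i=1}^{n} x^{i} g_{i}(x),$$
the superscripts on $x^{1}, \ldots, x^{n}$ label coordinates of the point $x \in \mathbb{R}^{n}$, whereas in a term such as $x^{2} g_{2}(x)$ in equation (1.2) the superscript is an exponent: the single real variable $x$ squared.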

Peter said:
Question 1
Now Tu seems to put $$n= 1$$ in the equation in the lemma but does not change $$\mathbb{R}^n$$ to $$\mathbb{R}^1$$ and does not change $$x = (x^1, x^2, \ldots, x^n)$$ to $$x = (x^1)$$ ... ...

How can this be valid?

The change from $\mathbb{R}^{n}$ to $\mathbb{R}$ has occurred (the superscript $x^{1}$ is dropped and he just writes $x$ when in $\mathbb{R}$) as evidenced by the equation $$f(x)=f(0)+xg_{1}(x).$$ Had he been working in $\mathbb{R}^{n}$ this would instead be written as $$f(x) = f(0) + x^{1}g_{1}(x) + x^{2}g_{2}(x) +\cdots + x^{n}g_{n}(x),$$ where $x=(x^{1}, x^{2},\ldots, x^{n}).$
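For reference, here is where $g_{1}$ comes from in the one-variable case: the formula in the proof of the lemma, specialised (if I am reading it correctly) to $n = 1$, $p = 0$, is
$$g_{1}(x) = \int_{0}^{1} f'(tx)\, dt, \qquad \text{so that} \qquad x\, g_{1}(x) = \int_{0}^{1} \frac{d}{dt} f(tx)\, dt = f(x) - f(0),$$
which is exactly $f(x) = f(0) + x g_{1}(x)$.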
Peter said:
Question 2
How exactly does Tu arrive at the above equation ...? I take it he puts $$f = g_i$$ and $$p = 0$$ ... but how does he get $$x g_{ i + 1 }(x)$$ out of the summation term ...? ( ... note that it is the $$i + 1$$ subscript in $$g_{ i + 1 }$$ that I find puzzling ... )

You are correct: taking $f=g_{i}$ and reapplying the lemma in the case of $\mathbb{R}^{1}$ is exactly right, nicely done. There is no summation because, in this case, $g_{i}(x)$ is a function of a single variable, so each application of the lemma produces just one new function, which Tu labels $g_{i+1}$. As mentioned above, the powers of $x$ in equation (1.2) are exponents on the single variable $x$ and do not represent coordinates of a vector.
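To see explicitly where the $i+1$ subscript comes from, here is the chain of substitutions written out (nothing here beyond repeated use of the $n = 1$, $p = 0$ case of the lemma):
$$f(x) = f(0) + x\, g_{1}(x), \qquad g_{1}(x) = g_{1}(0) + x\, g_{2}(x), \qquad g_{2}(x) = g_{2}(0) + x\, g_{3}(x), \ \ldots$$
Substituting each equation into the previous one gives
$$f(x) = f(0) + g_{1}(0)\, x + g_{2}(0)\, x^{2} + \cdots + g_{i}(0)\, x^{i} + x^{i+1} g_{i+1}(x),$$
and one can check that $g_{k}(0) = \dfrac{f^{(k)}(0)}{k!}$, which is why equation (1.2) has the usual Taylor coefficients.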

Peter said:
Question 3
I must say that generally I am having trouble following the overall 'strategy' of the proof ... can it be summarised as transforming the equations of the lemma into a valid Taylor series ...?

... ... but mind you he only seems to show this for $$p= 0$$?

I would say that the strategy is the direct application of the lemma in the case $n=1$, $p=0$, repeated over and over on the sequence of functions $g_{i}(x)$, to obtain the order-$(i+1)$ Taylor polynomial (with remainder) from single-variable calculus.
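If a concrete check is helpful, here is a small sketch in Python with SymPy (the sample function and the helper name remainder_factor are my own, purely illustrative choices, not from Tu) that carries out exactly this repeated application for a one-variable function:

```python
import sympy as sp

x, t = sp.symbols('x t')

def remainder_factor(h):
    # g(x) = integral_0^1 h'(t x) dt, so that h(x) = h(0) + x g(x)
    # (the n = 1, p = 0 case of the lemma)
    return sp.integrate(sp.diff(h, x).subs(x, t * x), (t, 0, 1))

# A sample smooth function of one variable (arbitrary choice, purely for illustration)
f = x**3 + 4*x**2 + 2*x + 5

g1 = remainder_factor(f)    # f(x)  = f(0)  + x g1(x)
g2 = remainder_factor(g1)   # g1(x) = g1(0) + x g2(x)

# One application of the lemma
print(sp.simplify(f - (f.subs(x, 0) + x * g1)))                          # 0

# Two applications: f(x) = f(0) + g1(0) x + x^2 g2(x)
print(sp.simplify(f - (f.subs(x, 0) + g1.subs(x, 0) * x + x**2 * g2)))   # 0

# The coefficients match the Taylor coefficients: g1(0) = f'(0), g2(0) = f''(0)/2!
print(g1.subs(x, 0), sp.diff(f, x).subs(x, 0))         # 2 2
print(g2.subs(x, 0), sp.diff(f, x, 2).subs(x, 0) / 2)  # 4 4
```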

Let me know if anything is still unclear.
 

All clear now, thanks GJA ...

I appreciate your most helpful post ...

Peter
 