Derivative of a Vector-Valued Function of a Real Variable ...

In summary: Differentiability of ##g(x)## at a point ##x=a## according to Definition 9.1.1 means$$\lim_{h \to 0} \dfrac{g(a+h)-g(a)}{h} = g\,'(a)$$which means that for all ##\varepsilon > 0## we can find a ##\delta(\varepsilon) > 0## such that - as soon as ##|h|< \delta(\varepsilon)## - we have ##||\dfrac{g(a+h)-g(a)}{h} - g\,'(a)|| < \varepsilon##.
  • #1
Math Amateur
I am reading Hugo D. Junghenn's book: "A Course in Real Analysis" ...

I am currently focused on Chapter 9: "Differentiation on ##\mathbb{R}^n##"

I need some help with the proof of Proposition 9.1.2 ...

Proposition 9.1.2 and the preceding relevant Definition 9.1.1 read as follows:
[Attached images: Junghenn, Proposition 9.1.2 and Definition 9.1.1, Parts 1 and 2]

In the above text from Junghenn we read the following:

" ... ... The assertions follow directly from the inequalities

$$\left\vert \frac{f_j ( a + h ) - f_j (a)}{ h } - x_j \right\vert^2 \;\le\; \left\| \frac{ f( a + h ) - f(a) }{ h } - ( x_1, \ldots , x_m ) \right\|^2 \;\le\; \sum_{ j = 1 }^m \left\vert \frac{f_j ( a + h ) - f_j (a)}{ h } - x_j \right\vert^2$$

... ... "
Can someone please show why the above inequalities hold true ... and further how they lead to the proof of Proposition 9.1.2 ...

Help will be much appreciated ...

Peter
 

  • #2
Let ##\mathcal{E}## denote the expression inside the norm, resp. the absolute value (out of laziness, not because it has a certain meaning). If all the ##f_j## are differentiable, then ##|\mathcal{E}(f_j)|^2 < \frac{1}{n}\varepsilon## in a small neighborhood of ##a##, which gives you the estimate for ##||\mathcal{E}(f)||^2##, and vice versa.

Now the first question is: why is ##|x_j|^2 \leq ||x||^2 \leq \sum_j |x_j|^2## for any vector ##x=(x_1,\ldots,x_n) \in \mathbb{R}^n##?
Can you prove this?
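To make the inequalities concrete, here is a quick numerical check with an arbitrarily chosen vector (my own illustration, not from Junghenn):
$$x = (3, 4, 12): \qquad |x_2|^2 = 16 \;\le\; \|x\|^2 = 9 + 16 + 144 = 169 \;\le\; \sum_j |x_j|^2 = 169 .$$
For the Euclidean norm the last step is in fact an equality; the inequality form is what generalises.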
 
  • #3
fresh_42 said:
Let ##\mathcal{E}## denote the expression inside the norm, resp. the absolute value (out of laziness, not because it has a certain meaning). If all the ##f_j## are differentiable, then ##|\mathcal{E}(f_j)|^2 < \frac{1}{n}\varepsilon## in a small neighborhood of ##a##, which gives you the estimate for ##||\mathcal{E}(f)||^2##, and vice versa.

Now the first question is: why is ##|x_j|^2 \leq ||x||^2 \leq \sum_j |x_j|^2## for any vector ##x=(x_1,\ldots,x_n) \in \mathbb{R}^n##?
Can you prove this?
Thanks fresh_42 ...

To prove ##|x_j|^2 \leq ||x||^2 \leq \sum_j |x_j|^2## for any vector ##x=(x_1,\ldots,x_n) \in \mathbb{R}^n##.

... well ... we have ...

##\| x \|^2 = x_1^2 + x_2^2 + \ \ldots \ + x_n^2 ##

... so clearly ...

## \mid x_j \mid^2 \le \| x \|^2 ## ... ... ... ... ... (1)

Now ...

## \sum_j \mid x_j \mid^2 \ = \ \mid x_1 \mid^2 + \mid x_2 \mid^2 + \ \ldots \ + \mid x_n \mid^2 \ = \ x_1^2 + x_2^2 + \ \ldots \ + x_n^2 ##

hence

## \sum_j \mid x_j \mid^2 = \| x \|^2## ... ... ... ... ... (2)

and hence (2) implies

##\| x \|^2 \le \sum_j \mid x_j \mid^2 ## ... ... ... ... ... (3)

So ... (1) and (3) imply that ##\mid x_j \mid^2 \le \| x \|^2 \le \sum_j \mid x_j \mid^2## ...
Is that correct?

Peter
 
  • #4
Yes, that's correct, and the other claim is even easier, as we only have to change the order of reasoning: in one direction we use the first inequality, and in the other the second, depending on what is given. We only have to use the fact that the dimension ##n## is a constant, so it doesn't really affect the ##\varepsilon##.
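To spell out the role of the constant dimension with a small illustration (my own wording, assuming each componentwise error has already been made smaller than ##\varepsilon/\sqrt{n}##):
$$\| \mathcal{E}(f) \|^2 \;\le\; \sum_{j=1}^n \left(\frac{\varepsilon}{\sqrt{n}}\right)^2 \;=\; \varepsilon^2 ,$$
so the target tolerance ##\varepsilon## is met regardless of how large ##n## is, precisely because ##n## does not depend on ##h##.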
 
  • #5
fresh_42 said:
Yes, that's correct, and the other claim is even easier, as we only have to change the order of reasoning: in one direction we use the first inequality, and in the other the second, depending on what is given. We only have to use the fact that the dimension ##n## is a constant, so it doesn't really affect the ##\varepsilon##.
Thanks for all your help on this matter fresh_42 ...

But I am still struggling to relate what you are saying to differentiation of functions from ##\mathbb{R}## to ##\mathbb{R}## ... that is, when you write:

" ... ... If you have all the ##f_j## differentiable, then ##|\mathcal{E}(f_j)|^2 < \frac{1}{n}\varepsilon## in a small neighborhood of ##a## which gives you the estimation for ##||\mathcal{E}(f)||^2## and vice versa. ... ..."

... how does this arise out of the definition of differentiation of functions from ##\mathbb{R}## to ##\mathbb{R}## ... ? This seems to me to be important since the only definition/discussion of differentiation in Junghenn before the above case of the derivative of a vector-valued function of a real variable is the case of functions from ##\mathbb{R}## to ##\mathbb{R}## ... ...

Junghenn's introduction to the differentiation of functions from ##\mathbb{R}## to ##\mathbb{R}## reads as follows:
[Attached images: Junghenn, "Differentiation on ##\mathbb{R}##", Parts 1 to 4]


Peter
 

  • #6
Differentiability of ##g(x)## at a point ##x=a## according to definition 9.1.1 means
$$
\lim_{h \to 0} \dfrac{g(a+h)-g(a)}{h} = g\,'(a)
$$
which means that for all ##\varepsilon > 0## we can find a ##\delta(\varepsilon) > 0## such that - as soon as ##|h|< \delta(\varepsilon)## - we have ##||\dfrac{g(a+h)-g(a)}{h} - g\,'(a)|| < \varepsilon##. Now we can likewise set ##g=f_j## or ##g=f##.
If the ##f_j## are differentiable, we find for every ##\varepsilon > 0## a ##\delta_j(\varepsilon) > 0## with ##\left|\dfrac{f_j(a+h)-f_j(a)}{h} - f_j\,'(a)\right| < \varepsilon##, and then ##\left\|\dfrac{f(a+h)-f(a)}{h} - f\,'(a)\right\|^2 \leq \sum_j \left|\dfrac{f_j(a+h)-f_j(a)}{h} - f_j\,'(a)\right|^2 < \sum_j \varepsilon^2 = n\cdot \varepsilon^2 =: \varepsilon'## for all ##|h| < \delta := \min_j\{\,\delta_j(\varepsilon)\,\}##. So for every ##\varepsilon'## there is a ##\delta = \delta(\varepsilon')## with the required property (given ##\varepsilon'##, simply choose ##\varepsilon = \sqrt{\varepsilon'/n}##). The other direction, starting from the differentiability of ##f##, is analogous.
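As a purely numerical illustration of this componentwise limit - a minimal sketch with hypothetical component functions of my own choosing, not Junghenn's notation:
```python
import numpy as np

# Hypothetical vector-valued function f: R -> R^2 and its componentwise derivative.
def f(t):
    return np.array([np.sin(t), t**2])

def f_prime(t):
    return np.array([np.cos(t), 2 * t])

a = 1.0
for h in [1e-1, 1e-3, 1e-5]:
    diff_quot = (f(a + h) - f(a)) / h               # vector difference quotient
    err_vec = np.linalg.norm(diff_quot - f_prime(a))   # ||E(f)||
    err_comp = np.abs(diff_quot - f_prime(a))          # |E(f_j)| componentwise
    print(h, err_vec, err_comp)

# The vector error and both componentwise errors shrink together as h -> 0,
# which is exactly what the inequalities
# |E(f_j)|^2 <= ||E(f)||^2 <= sum_j |E(f_j)|^2 say they must do.
```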
 
  • #7
fresh_42 said:
Differentiability of ##g(x)## at a point ##x=a## according to definition 9.1.1 means
$$
\lim_{h \to 0} \dfrac{g(a+h)-g(a)}{h} = g\,'(a)
$$
which means that for all ##\varepsilon > 0## we can find a ##\delta(\varepsilon) > 0## such that - as soon as ##|h|< \delta(\varepsilon)## - we have ##||\dfrac{g(a+h)-g(a)}{h} - g\,'(a)|| < \varepsilon##. Now we can likewise set ##g=f_j## or ##g=f##.
If the ##f_j## are differentiable, we find for every ##\varepsilon > 0## a ##\delta_j(\varepsilon) > 0## with ##\left|\dfrac{f_j(a+h)-f_j(a)}{h} - f_j\,'(a)\right| < \varepsilon##, and then ##\left\|\dfrac{f(a+h)-f(a)}{h} - f\,'(a)\right\|^2 \leq \sum_j \left|\dfrac{f_j(a+h)-f_j(a)}{h} - f_j\,'(a)\right|^2 < \sum_j \varepsilon^2 = n\cdot \varepsilon^2 =: \varepsilon'## for all ##|h| < \delta := \min_j\{\,\delta_j(\varepsilon)\,\}##. So for every ##\varepsilon'## there is a ##\delta = \delta(\varepsilon')## with the required property (given ##\varepsilon'##, simply choose ##\varepsilon = \sqrt{\varepsilon'/n}##). The other direction, starting from the differentiability of ##f##, is analogous.

Hi fresh_42,

Just now reflecting on your above reply ...

... BUT ... just a minor clarification ...

In the above post, you are dealing with expressions like ## \left\| \dfrac{f_j(a+h)-f_j(a)}{h} - f_j\,'(a) \right\| ## while Junghenn's inequalities deal with expressions like ## \left\| \dfrac{f_j(a+h)-f_j(a)}{h} - x_j \right\|## ... ...

Can you explain this apparent difference...

I find the difference quite perplexing ... how do they mean the same ...?

Peter
 
  • #8
Math Amateur said:
I find the difference quite perplexing ... how do they mean the same ...?
I have chosen the derivatives to concentrate on the argument. Junghenn's notation is a bit better. The point is that, at the start, we do not know whether all the derivatives exist, so he has chosen ##x_j, x## as placeholders. The argument formally goes:
##f_j## differentiable ##\Rightarrow |\ldots - f_j'(a)| < \varepsilon_j \Rightarrow ||\ldots - (f_1'(a),\ldots ,f_n'(a)) || < \varepsilon## with the corresponding choices of the ##\varepsilon_j,\varepsilon## and the deltas, and so on. Finally, by the uniqueness of the derivative, we get ##f\,'(a) = (f_1'(a),\ldots ,f_n'(a))##. Conversely, we have ##||\ldots - f\,'(a)|| < \varepsilon \Rightarrow |\ldots - (f\,'(a))_j| < \varepsilon_j##, again with the corresponding choices, and uniqueness again gives us ##f_j'(a) = (f\,'(a))_j \,.## By writing ##x_j , x## instead, he saved all these details and could write the two inequalities in one line. Otherwise, you might have objected: "But we don't know the existence of the derivative yet! We want to prove it!" The dummy variables for the derivatives are a shorthand, and my version with the derivatives was, strictly speaking, not rigorous, as I assumed their existence from the start in order to show how the inequalities work. In detail, the proof is a combination of the inequalities and the uniqueness argument.
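Put schematically (my own summary of the two directions above, not a quote from Junghenn), the statement being proved amounts to:
$$f \text{ is differentiable at } a \iff \text{each } f_j \text{ is differentiable at } a, \qquad \text{and then } f\,'(a) = \big(f_1'(a), \ldots, f_n'(a)\big).$$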
 
  • #9
fresh_42 said:
I have chosen the derivatives to concentrate on the argument. Junghenn's notation is a bit better. The point is that, at the start, we do not know whether all the derivatives exist, so he has chosen ##x_j, x## as placeholders. The argument formally goes:
##f_j## differentiable ##\Rightarrow |\ldots - f_j'(a)| < \varepsilon_j \Rightarrow ||\ldots - (f_1'(a),\ldots ,f_n'(a)) || < \varepsilon## with the corresponding choices of the ##\varepsilon_j,\varepsilon## and the deltas, and so on. Finally, by the uniqueness of the derivative, we get ##f\,'(a) = (f_1'(a),\ldots ,f_n'(a))##. Conversely, we have ##||\ldots - f\,'(a)|| < \varepsilon \Rightarrow |\ldots - (f\,'(a))_j| < \varepsilon_j##, again with the corresponding choices, and uniqueness again gives us ##f_j'(a) = (f\,'(a))_j \,.## By writing ##x_j , x## instead, he saved all these details and could write the two inequalities in one line. Otherwise, you might have objected: "But we don't know the existence of the derivative yet! We want to prove it!" The dummy variables for the derivatives are a shorthand, and my version with the derivatives was, strictly speaking, not rigorous, as I assumed their existence from the start in order to show how the inequalities work. In detail, the proof is a combination of the inequalities and the uniqueness argument.
Thanks fresh_42 ...

... just reflecting on what you have written ...

Thanks for all your help ... it is much appreciated...

Peter
 

1. What is a vector-valued function of a real variable?

A vector-valued function of a real variable is a mathematical function that takes in a real number as its input and outputs a vector (a quantity with both magnitude and direction) in a specific space, such as 2D or 3D space. This function is typically denoted by r(t) or f(t) and can be graphed as a curve or path in a coordinate system.
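For example (an illustration of my own choosing, not tied to any particular text):
$$\mathbf{r}(t) = (\cos t, \sin t, t), \qquad t \in \mathbb{R},$$
which traces a helix in 3D space.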

2. What is the derivative of a vector-valued function?

The derivative of a vector-valued function is a mathematical concept that represents the rate of change of the vector over the input variable. It is a vector itself, with each component representing the derivative of the corresponding component of the original function. Geometrically, it represents the tangent vector to the curve at a specific point.
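Continuing the helix example above, differentiating each component gives (again my own illustration):
$$\mathbf{r}\,'(t) = (-\sin t, \cos t, 1),$$
the tangent vector to the helix at the point ##\mathbf{r}(t)##.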

3. How is the derivative of a vector-valued function calculated?

The derivative of a vector-valued function can be calculated using the same principles as the derivative of a scalar-valued function. It involves finding the limit of the difference quotient as the change in the input variable approaches 0. This limit can be calculated for each component of the vector function, resulting in the derivative vector.
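In symbols, this is the same componentwise limit discussed earlier in the thread:
$$\mathbf{r}\,'(t) = \lim_{h \to 0} \frac{\mathbf{r}(t+h) - \mathbf{r}(t)}{h} = \big(r_1'(t), \ldots, r_m'(t)\big),$$
provided each componentwise limit exists.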

4. What is the significance of the derivative of a vector-valued function?

The derivative of a vector-valued function has several important applications in mathematics and physics. It can be used to find the velocity and acceleration of a moving object, as well as the curvature of a curve. It also plays a crucial role in vector calculus and multivariable calculus, allowing for the calculation of line and surface integrals.
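For instance (a hypothetical trajectory chosen only for illustration): if the position is ##\mathbf{r}(t) = (t, t^2)##, then the velocity is ##\mathbf{v}(t) = \mathbf{r}\,'(t) = (1, 2t)## and the acceleration is ##\mathbf{a}(t) = \mathbf{r}\,''(t) = (0, 2)##.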

5. Are there any rules or formulas for finding the derivative of a vector-valued function?

Yes, there are several rules and formulas that can be used to find the derivative of a vector-valued function. Differentiation is carried out componentwise, so the sum rule and chain rule carry over directly, and there are product rules for a scalar function times a vector function, for the dot product, and for the cross product. Additionally, there are specific formulas for the derivative of a vector function expressed in polar, cylindrical, or spherical coordinates. In general, the rules used for scalar-valued functions can also be applied to vector-valued functions, as long as the appropriate vector operations are used.
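Two of the product rules mentioned above, written out explicitly (standard identities, stated here only for illustration):
$$\frac{d}{dt}\big(\mathbf{u}(t)\cdot\mathbf{v}(t)\big) = \mathbf{u}\,'(t)\cdot\mathbf{v}(t) + \mathbf{u}(t)\cdot\mathbf{v}\,'(t), \qquad \frac{d}{dt}\big(\mathbf{u}(t)\times\mathbf{v}(t)\big) = \mathbf{u}\,'(t)\times\mathbf{v}(t) + \mathbf{u}(t)\times\mathbf{v}\,'(t).$$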
