Existence of Partial Derivatives and Continuity


Discussion Overview

The discussion revolves around the proof of a proposition from Shmuel Kantorovitz's book "Several Real Variables," specifically focusing on the derivation of certain formulas related to partial derivatives and continuity. Participants are seeking clarification on specific elements of the proof and the definitions used within it.

Discussion Character

  • Technical explanation
  • Debate/contested
  • Mathematical reasoning

Main Points Raised

  • Peter seeks assistance in deriving the equation ##f(x + h) - f(x) = \sum_{ j = 1}^k [ F_j ( h_j ) - F_j (0) ]## and its equivalence to another formula involving partial derivatives.
  • Andrew expresses confusion over the definition of ##F_j(t)## due to unclear notation in the text, particularly a superscript that appears ambiguous.
  • Andrew later clarifies that the superscript should be ##h^{j-1}##, suggesting that the confusion arose from the quality of the text's printing.
  • Andrew proposes that to prove the formula, one should substitute values into the definition of ##F_j## and questions the necessity of absolute value signs in the proof, suggesting they may be a mistake.
  • Peter acknowledges the difficulty of reading the scan, explains that the apparent absolute value signs are actually the square brackets of the text's printing, and thanks Andrew, reporting that following his suggestions led to a successful derivation.

Areas of Agreement / Disagreement

Participants generally agree on the steps needed to derive the formulas. The apparent disagreement over the notation is resolved: the ambiguous superscript is identified as ##h^{j-1}##, and the seeming absolute value signs turn out to be square brackets rendered poorly in the scan.

Contextual Notes

The discussion highlights limitations arising from the print quality of the scanned text, which affects the interpretation of mathematical symbols: a superscript is initially unreadable, and square brackets are mistaken for absolute value signs.

I am reading the book "Several Real Variables" by Shmuel Kantorovitz ...

I am currently focused on Chapter 2: Derivation ...

I need help with another element of the proof of Kantorovitz's Proposition on pages 61-62, which reads as follows:
[Image: Kantorovitz - Proposition, page 61, Part 1]

[Image: Kantorovitz - Proposition, page 61, Part 2]

In the above proof we read the following:

" ... ... Formula 2.4 is trivially true in case ##h_j = 0##, and by (2.2) - (2.4)

##f(x + h) - f(x) = \sum_{ j = 1}^k [ F_j ( h_j ) - F_j (0) ]##

##= \sum_j h_j \frac{ \partial f }{ \partial x_j } ( x + h^{ j - 1 } + \theta_j h_j e^j )## ... ... ... ... ... "I have tried to derive ##f(x + h) - f(x) = \sum_{ j = 1}^k [ F_j ( h_j ) - F_j (0) ]## but did not succeed ...

... can someone please show how ##f(x + h) - f(x)## equals ##\sum_{ j = 1}^k [ F_j ( h_j ) - F_j (0) ]## ...Also can someone show how the above equals ##\sum_j h_j \frac{ \partial f }{ \partial x_j } ( x + h^{ j - 1 } + \theta_j h_j e^j )## ... ...

Help will be much appreciated ... ...

Peter
 

Hello Peter. It's good to see you posting again. Haven't seen you for some time.

I can't make sense of the line that defines ##F_j(t)##, between 2.2 and 2.3. There's a big space containing a fuzzy superscript-like mark that is a bit like a ##t## or a 1, but doesn't exactly match either. I can't think of any way to interpret it. It can't be an exponent, as a vector can't be raised to a power.

Do you know what that line is trying to do?

PS Also ##e^j## appears undefined. Is the author referring to the vector with all zero components except for a 1 in the ##j##th position?
 
Ah OK, got it now. It's actually ##h^{j-1}## but the minus sign is invisible in the scan, and the 1 doesn't quite look like a 1.

To prove that formula, look back at 2.2. All we need to do is prove that ##F_j(h_j)=f(x+h^j)## and ##F_j(0)=f(x+h^{j-1})##. To do that, just substitute ##h_j## and 0 into the formula that defines ##F_j##. It's easy for the 0 case. The other case might require a bit of extra thought, involving the relationship between ##h^j## and ##h^{j-1}##.
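In case it helps to see it written out, here is a sketch of that telescoping computation. I am assuming (the scan suggests, but I can't fully verify) that ##h^j = \sum_{i \le j} h_i e^i##, so that ##h^0 = 0## and ##h^k = h##, and that ##F_j(t) = f(x + h^{j-1} + t e^j)##:

```latex
% A sketch, assuming h^j = \sum_{i \le j} h_i e^i (so h^0 = 0, h^k = h)
% and F_j(t) = f(x + h^{j-1} + t e^j); these definitions are read off
% the scan, so treat them as assumptions.
\begin{align*}
  F_j(0)   &= f(x + h^{j-1}), \\
  F_j(h_j) &= f\bigl(x + h^{j-1} + h_j e^j\bigr) = f(x + h^j), \\
  \sum_{j=1}^{k} \bigl[ F_j(h_j) - F_j(0) \bigr]
           &= \sum_{j=1}^{k} \bigl[ f(x + h^j) - f(x + h^{j-1}) \bigr] \\
           &= f(x + h^k) - f(x + h^0) \\
           &= f(x + h) - f(x).
\end{align*}
```

The sum telescopes: each ##f(x + h^j)## appears once with a plus sign and once with a minus sign, except for the two end terms.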

For the last bit, substitute the RHS of 2.4 into the bit inside the absolute value signs in the next line. Then use 2.3 to replace the derivative of ##F_j## by a partial derivative of ##f## (RHS of 2.3), inside that absolute value.
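Spelled out (a reconstruction, since I'm inferring what 2.3 and 2.4 say from the visible fragments), the chain for each ##j## would be:

```latex
% A sketch, assuming (2.3) reads
%   F_j'(t) = (\partial f / \partial x_j)(x + h^{j-1} + t e^j)
% and (2.4) is the one-variable mean value theorem applied to F_j,
% with some \theta_j \in (0, 1).
\begin{align*}
  F_j(h_j) - F_j(0)
    &= h_j \, F_j'(\theta_j h_j)
       && \text{(mean value theorem, 2.4)} \\
    &= h_j \, \frac{\partial f}{\partial x_j}\bigl(x + h^{j-1} + \theta_j h_j e^j\bigr)
       && \text{(by 2.3).}
\end{align*}
```

Summing over ##j## and using the telescoping identity above then gives the displayed formula ##f(x+h) - f(x) = \sum_j h_j \frac{\partial f}{\partial x_j}(x + h^{j-1} + \theta_j h_j e^j)##.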

But how to get rid of the absolute value signs? I think the answer is that they should not be there; they are a mistake. Look at where they are introduced in 2.2. No reason is given for them, and inspection of that formula suggests it makes more sense if the absolute value signs are replaced by parentheses. Indeed, for 2.2 to be true as written, ##x## would have to be a local minimum of ##f##, and that is not in the premises of the proposition. The absolute values are not actually used anywhere in the proof: it goes through if they are changed to parentheses, but not otherwise.
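To make that local-minimum point concrete (my reading of the formula, not a quote from the book): if the bars were genuine absolute values, 2.2 would assert, for every sufficiently small ##h##,

```latex
% If the bars in (2.2) were real absolute values, it would say
\[
  f(x+h) - f(x) \;=\; \sum_{j=1}^{k} \bigl| F_j(h_j) - F_j(0) \bigr| \;\ge\; 0
\]
% for all small h; with square brackets instead, the sum simply
% telescopes, as in the sketch above.
```

so ##f(x+h) \ge f(x)## near ##x##, which is exactly the local-minimum condition.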
 
Hi Andrew,

Good to hear from you ...

Sorry about the difficulty of reading the scan ... it's due to the nature and quality of the printing in the text ... in particular, the brackets [ and ] look like absolute value signs ... sorry about misleading you ...

Will now read your next post ...

Peter
 
Thanks so much, Andrew ...

I did what you suggested ... and the result was achieved!

Thanks again!

Peter
 
