Tangent Planes to Graphs of Functions from Rn->Rm


Homework Help Overview

The discussion revolves around finding the linear equations of the tangent plane in R4 to the graph of the mapping F: R2->R2 defined by F(x1, x2) = (sin(x1 - x2), cos(x1 + x2)) at a specific point. The problem is situated within the context of calculus, particularly focusing on tangent planes to multi-dimensional graphs.

Discussion Character

  • Exploratory, Conceptual clarification, Mathematical reasoning, Assumption checking

Approaches and Questions Raised

  • Participants discuss various methods for finding the tangent plane, including a reference to a specific example from Edwards' book and an alternative method from a Stanford webpage. There is an exploration of the derivative matrix and its implications for the orthogonal complement in the context of the problem.

Discussion Status

Some participants have made progress in understanding the problem and have identified a key issue regarding the dimensionality of the orthogonal complement. There is an acknowledgment of the equivalence of different methods, and one participant has resolved their confusion about the derivative matrix needed for the problem.

Contextual Notes

Participants note the challenge of working with the dimensionality of the derivative matrix and the orthogonal complement, which is central to applying Edwards' method correctly. There is also a request for clarification on deriving the appropriate derivative matrix for the graph of F.

Disinterred
1. This is problem 2.10 from the book "Calculus of Several Variables" by C.H. Edwards:

Let the mapping F: R2->R2 be defined by F(x1, x2) = (sin(x1 - x2), cos(x1 + x2)). Find the linear equations of the tangent plane in R4 to the graph of F at the point (π/4, π/4, 0, 0).


The attempt at a solution

First off, I would greatly appreciate it if anyone knew any other references (books/webpages) on this topic. So far the only sources I have found that discuss tangent planes to n+m dimensional graphs are Edwards and this page: math.stanford.edu/~genauer/TangentGraph.pdf. The latter does not give enough motivation (to me at least) for its solution of the problem, but I can reproduce its results easily for the question in Edwards.

But I would like to use the method outlined in Edwards' book to solve this question. He solves for the linear equations of a tangent plane in an example for the graph of a function F: R2->R4, where F(x1, x2) = (x2, x1, x1*x2, (x1)^2 - (x2)^2), at the point (a, F(a)) where a = (1, 2).

He starts off by finding the image of the linear mapping dFa: R2->R4 by computing the derivative matrix and then separating this matrix into two column vectors, which span the image space.

Then he finds the orthogonal complement to the image space of dFa, which has dimension 4 - 2 = 2, and uses a basis of this complement to write an equation of the form Ax = 0, where the rows of A are the vectors spanning the orthogonal complement and x is a point of R^m (here, R^4).

Finally, to get the full form of the equation(s) of the tangent plane at the point F(a), he explicitly writes out the translated equation A(x - F(a)) = 0.
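The steps above can be sketched numerically for Edwards' example (a sketch, assuming numpy, and reading the fourth component of his F as (x1)^2 - (x2)^2):

```python
import numpy as np

def null_space_rows(M, tol=1e-10):
    # rows of the returned matrix span the null space of M (computed via SVD)
    _, s, vt = np.linalg.svd(M)
    return vt[int((s > tol).sum()):]

a = np.array([1.0, 2.0])

def F(x):
    x1, x2 = x
    return np.array([x2, x1, x1 * x2, x1**2 - x2**2])

# derivative matrix dF_a (4x2); its two columns span the image of dF_a
J = np.array([[0.0,      1.0],
              [1.0,      0.0],
              [a[1],     a[0]],
              [2 * a[0], -2 * a[1]]])

# orthogonal complement of the image = null space of J^T, dimension 4 - 2 = 2
A = null_space_rows(J.T)

# tangent-plane equations at F(a): A @ (y - F(a)) = 0
# sanity check: a nearby image point satisfies them to first order
h, v = 1e-6, np.array([0.3, -0.7])
residual = A @ (F(a + h * v) - F(a))
print(np.abs(residual).max() < 1e-9)
```

The two rows of A give the two linear equations A(y - F(a)) = 0 that cut out the tangent plane in R^4.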

Now here's my problem: for the question I stated, the orthogonal complement has dimension 2 - 2 = 0, so I cannot proceed by Edwards' "algorithm" for finding tangent planes for this function. I can, however, find an answer via the method proposed on the stanford.edu site, but I only understand that method on a superficial level.

Any help is greatly appreciated!

Thanks
Disinterred
 
Since that function is from [itex]R^2[/itex] to [itex]R^2[/itex], its derivative is a 2 by 2 matrix (more correctly, it is the linear transformation defined by that matrix)
[tex]\begin{bmatrix}\frac{\partial \sin(x_1- x_2)}{\partial x_1} & \frac{\partial \sin(x_1- x_2)}{\partial x_2} \\ \frac{\partial \cos(x_1+ x_2)}{\partial x_1} & \frac{\partial \cos(x_1+ x_2)}{\partial x_2}\end{bmatrix}[/tex]
[tex]= \begin{bmatrix}\cos(x_1- x_2) & -\cos(x_1- x_2) \\ -\sin(x_1+ x_2) & -\sin(x_1+ x_2)\end{bmatrix}[/tex]

At [itex]x_1= x_2= \pi/4[/itex], [itex]x_1- x_2= 0[/itex] and [itex]x_1+ x_2= \pi/2[/itex]; cos(0) = 1 and sin(π/2) = 1, so that is
[tex]\begin{bmatrix}1 & -1 \\ -1 & -1\end{bmatrix}[/tex]

and since the function value at [itex]a= (\pi/4, \pi/4)[/itex] is (0, 0), the tangent plane is given by
[tex]\begin{bmatrix}y_1 \\ y_2\end{bmatrix}= \begin{bmatrix}1 & -1 \\ -1 & -1\end{bmatrix}\begin{bmatrix}x_1- \pi/4 \\ x_2- \pi/4\end{bmatrix}[/tex]
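As a cross-check (a sketch, assuming numpy), the differential of F at (π/4, π/4) from the original problem, and the first-order accuracy of the resulting affine approximation, can be verified numerically:

```python
import numpy as np

def F(x):
    x1, x2 = x
    return np.array([np.sin(x1 - x2), np.cos(x1 + x2)])

a = np.array([np.pi / 4, np.pi / 4])

# Jacobian of F at a, computed by hand from the partial derivatives
J = np.array([[ np.cos(a[0] - a[1]), -np.cos(a[0] - a[1])],
              [-np.sin(a[0] + a[1]), -np.sin(a[0] + a[1])]])

# the affine approximation y = F(a) + J (x - a) should match F near a
x = a + np.array([1e-5, -2e-5])
y_lin = F(a) + J @ (x - a)
print(np.allclose(J, [[1, -1], [-1, -1]]))
print(np.abs(F(x) - y_lin).max() < 1e-8)
```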
 
HallsofIvy said:
Since that function is from [itex]R^2[/itex] to [itex]R^2[/itex], its derivative is a 2 by 2 matrix (more correctly, it is the linear transformation defined by that matrix)
[tex]\begin{bmatrix}\frac{\partial \sin(x_1- x_2)}{\partial x_1} & \frac{\partial \sin(x_1- x_2)}{\partial x_2} \\ \frac{\partial \cos(x_1+ x_2)}{\partial x_1} & \frac{\partial \cos(x_1+ x_2)}{\partial x_2}\end{bmatrix}[/tex]
[tex]= \begin{bmatrix}\cos(x_1- x_2) & -\cos(x_1- x_2) \\ -\sin(x_1+ x_2) & -\sin(x_1+ x_2)\end{bmatrix}[/tex]

At [itex]x_1= x_2= \pi/4[/itex], [itex]x_1- x_2= 0[/itex] and [itex]x_1+ x_2= \pi/2[/itex]; cos(0) = 1 and sin(π/2) = 1, so that is
[tex]\begin{bmatrix}1 & -1 \\ -1 & -1\end{bmatrix}[/tex]

and since the function value at [itex]a= (\pi/4, \pi/4)[/itex] is (0, 0), the tangent plane is given by
[tex]\begin{bmatrix}y_1 \\ y_2\end{bmatrix}= \begin{bmatrix}1 & -1 \\ -1 & -1\end{bmatrix}\begin{bmatrix}x_1- \pi/4 \\ x_2- \pi/4\end{bmatrix}[/tex]

Thank you for your help! I get this answer too using the method proposed on that website I posted. But is it possible at all to use Edwards' method? I know Edwards' method is equivalent (at least in his example in the book) to the method you just posted here, since they both arrive at the same answer, but it seems to break down for my question, since an orthogonal complement cannot be found for the problem. In any case, I will investigate it and post here if I find a resolution.

Thank you for your help, it is much appreciated!
 
Okay, I figured out my problem. In the example in Edwards, he finds the derivative matrix of the function F itself, which for my question would be a 2x2 matrix, and thus the orthogonal complement would be a 2 - 2 = 0 dimensional space. What I instead need to do in this question is find the derivative matrix of the graph of F, i.e. of the map G(x1, x2) = (x1, x2, F(x1, x2)), which is a 4x2 matrix. Then the orthogonal complement will be a 4 - 2 = 2 dimensional space, and we can proceed as normal via Edwards' algorithm.

When I did this, I got the exact same answer as I would get using the method outlined on the math.stanford.edu webpage (i.e. the answer HallsofIvy gets).
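Concretely, this fix can be sketched numerically (a sketch, assuming numpy; G here is my notation for the graph map x ↦ (x, F(x))):

```python
import numpy as np

def F(x):
    x1, x2 = x
    return np.array([np.sin(x1 - x2), np.cos(x1 + x2)])

def G(x):
    # graph map G: R^2 -> R^4, G(x) = (x, F(x)); the graph of F is the image of G
    return np.concatenate([x, F(x)])

a = np.array([np.pi / 4, np.pi / 4])

# derivative of the graph map: dG_a = [I; dF_a], a 4x2 matrix
J = np.array([[ np.cos(a[0] - a[1]), -np.cos(a[0] - a[1])],
              [-np.sin(a[0] + a[1]), -np.sin(a[0] + a[1])]])
dG = np.vstack([np.eye(2), J])

# orthogonal complement of the image of dG_a: null space of dG^T, dimension 4 - 2 = 2
_, s, vt = np.linalg.svd(dG.T)
A = vt[int((s > 1e-10).sum()):]   # rows span the complement

# tangent plane in R^4: A @ (p - G(a)) = 0; a nearby graph point satisfies it
p = G(a + 1e-6 * np.array([1.0, 2.0]))
print(np.abs(A @ (p - G(a))).max() < 1e-9)
```

The two components of A(p - G(a)) = 0 are the two linear equations of the tangent plane in R^4.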

Cheers
Disinterred
 
Disinterred said:
Okay, I figured out my problem. In the example in Edwards, he finds the derivative matrix of the function F itself, which for my question would be a 2x2 matrix, and thus the orthogonal complement would be a 2 - 2 = 0 dimensional space. What I instead need to do in this question is find the derivative matrix of the graph of F, i.e. of the map G(x1, x2) = (x1, x2, F(x1, x2)), which is a 4x2 matrix. Then the orthogonal complement will be a 4 - 2 = 2 dimensional space, and we can proceed as normal via Edwards' algorithm.

When I did this, I got the exact same answer as I would get using the method outlined on the math.stanford.edu webpage (i.e. the answer HallsofIvy gets).

Cheers
Disinterred

Hey guys,
I would appreciate it if you could explain how you get the 4x2 derivative matrix of the graph of F in your last post, and what the final equations are.

Thanks in advance!
 
