Differential as a generalized directional derivative (Munkres, Analysis on Manifolds)

SUMMARY

The discussion centers on proving the equality \( h(x) = \sum_{j=1}^k (-1)^{j-1} Dg_j(x) \cdot v_j \) for a \( (k-1) \)-form \( \omega \) on an open set \( A \subset \mathbb{R}^n \). The problem is divided into three parts, with part (b) asking to verify the theorem when \( \omega = f\,dx_I \). Participants clarify notation, interpreting \( Dg_j \) as the gradient of \( g_j \), discuss how the wedge product enters the proof, and suggest induction as a way to complete it.

PREREQUISITES
  • Understanding of differential forms and their properties.
  • Familiarity with the wedge product and its application in multivariable calculus.
  • Knowledge of directional derivatives and gradients in \( \mathbb{R}^n \).
  • Basic concepts of determinants and their role in linear algebra.
NEXT STEPS
  • Study the properties of differential forms in Munkres' "Analysis on Manifolds".
  • Learn about the wedge product and its implications in vector calculus.
  • Explore the concept of directional derivatives and their computation in \( \mathbb{R}^n \).
  • Investigate the use of induction in mathematical proofs, particularly in the context of linear algebra.
USEFUL FOR

Mathematics students, particularly those studying advanced calculus or differential geometry, as well as educators looking to deepen their understanding of differential forms and their applications in analysis.

mathmonkey

Homework Statement

Let ##A## be open in ##\mathbb{R}^n##; let ##\omega## be a ##(k-1)##-form in ##A##. Given ##v_1, \dots, v_k \in \mathbb{R}^n##, define
##h(x) = d\omega(x)((x;v_1),...,(x;v_k)),##
##g_j(x) = \omega (x)((x;v_1),...,\widehat{(x;v_j)},...,(x;v_k)),##
where ##\hat{a}## means that the component ##a## is to be omitted.

Prove that ##h(x) = \sum_{j=1}^k (-1)^{j-1} Dg_j(x) \cdot v_j.##
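(As a sanity check, not part of the problem: for ##k = 1##, ##\omega = f## is a 0-form and ##g_1 = f##, so the identity reduces to ##h(x) = df(x)(x;v_1) = Df(x) \cdot v_1##, the usual directional derivative.)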


Homework Equations

The problem is broken into three parts:
(a) Let ##X = \begin{bmatrix} v_1 & \cdots & v_k \end{bmatrix}##. For each ##j##, let ##Y_j = \begin{bmatrix} v_1 & \cdots & \hat{v}_j & \cdots & v_k \end{bmatrix}##. Given ##(i, i_1, \dots, i_{k-1})##, show that

##\det X(i, i_1, \dots, i_{k-1}) = \sum_{j=1}^k (-1)^{j-1} v_{ij} \det Y_j(i_1, \dots, i_{k-1}).##
(A quick numerical check of this identity is sketched after this list.)
(b) Verify the theorem in the case ##\omega = fdx_I##.
(c) Complete the proof.
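As promised above, here is a numerical sanity check of the part (a) identity. This is my own sketch in Python/NumPy; the dimensions, the random data, the row tuple, and the reading of ##v_{ij}## as the ##(i, j)## entry of ##X## (the ##i##-th component of ##v_j##) are my assumptions, not from the text.

```python
import numpy as np

# Part (a) reads as cofactor expansion of det X(i, i_1, ..., i_{k-1})
# along its first row, where X(rows) denotes the square submatrix of X
# whose rows are taken in the order given, and v_{ij} = X[i, j].

rng = np.random.default_rng(0)
n, k = 5, 3
X = rng.standard_normal((n, k))       # columns are v_1, ..., v_k

rows = (0, 2, 4)                      # the tuple (i, i_1, ..., i_{k-1})
i, rest = rows[0], rows[1:]

lhs = np.linalg.det(X[list(rows), :])

rhs = 0.0
for j in range(k):                    # j is 0-indexed here, so (-1)**j
    Yj = np.delete(X, j, axis=1)      # plays the role of (-1)^{j-1};
    rhs += (-1) ** j * X[i, j] * np.linalg.det(Yj[list(rest), :])

print(lhs, rhs)                       # should agree to rounding error
assert np.isclose(lhs, rhs)
```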

The Attempt at a Solution

I'm stuck on part (b). By the definition given in the text, if ##\omega = f\,dx_I## then ##d\omega = df \wedge dx_I##, but I'm not sure how to use the result of part (a) to prove part (b). If anyone can shed some light on this problem I'd be really grateful. Thanks!
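If it helps, here is the evaluation formula I believe should connect the two parts (assuming I'm reading Munkres' wedge-product conventions correctly; this is the standard expansion of a 1-form wedged with a ##(k-1)##-form):
$$d\omega(x)\big((x;v_1),\dots,(x;v_k)\big) = \sum_{j=1}^{k} (-1)^{j-1}\, df(x)(x;v_j)\; dx_I(x)\big((x;v_1),\dots,\widehat{(x;v_j)},\dots,(x;v_k)\big),$$
but I don't see how to get from here to the stated identity.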
Sorry, but I am not familiar with your notation. What is ##(x; v_i)##? Also, what is ##v_{ij}##? The ##j##-th component of ##v_i##? What is ##Dg_j##? The Jacobian? The gradient (which is possible under the identification ##T_p^* \mathbb{R}^n \cong \mathbb{R}^n##)? What is ##Dg_j \cdot v##? Is this the standard Euclidean inner product? Is ##I## a multi-index or a typo?

I will assume that ##Dg_j## is the gradient, so that ##Dg_j \cdot v_j## is the directional derivative.

I'm not sure what you are and are not allowed to use, but if ##\omega = f \, dx_I## then you are correct that ##d\omega = df \wedge dx_I##. Thus for two vectors ##v, w## we have
$$\begin{align*}
d\omega(v, w) &= (df \wedge dx_I)(v, w) \\
&= df(v) \, dx_I(w) - dx_I(v) \, df(w) \\
&= w_i \, Df \cdot v - v_i \, Df \cdot w,
\end{align*}$$
where the second equality holds by the definition of the wedge product (and the last line takes ##I = (i)## to be a single index, so that ##dx_I(w) = w_i##). Appropriate substitution of your vectors yields the desired equality. Perhaps induction will now work?
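To make this concrete, here is a quick numerical check of the ##k = 2## computation above. It is my own sketch: the function ##f(x) = x_1 x_2 x_3##, the index choice ##I = (1)##, and the finite-difference gradient are arbitrary assumptions, not from the problem.

```python
import numpy as np

# For omega = f dx_1 on R^3 with f(x) = x1*x2*x3 and k = 2:
#   h(x) = (df ^ dx_1)(v1, v2) = df(v1) * v2[0] - v1[0] * df(v2),
#   g_1(x) = f(x) * v2[0],   g_2(x) = f(x) * v1[0],
# and the claim is h(x) = Dg_1(x).v1 - Dg_2(x).v2.

def f(x):
    return x[0] * x[1] * x[2]

def grad(func, x, eps=1e-6):
    """Central-difference gradient of a scalar function at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (func(x + e) - func(x - e)) / (2 * eps)
    return g

rng = np.random.default_rng(1)
x, v1, v2 = rng.standard_normal((3, 3))   # a random point and vectors

# Left side, using the exact gradient of f:
Df = np.array([x[1] * x[2], x[0] * x[2], x[0] * x[1]])
h = (Df @ v1) * v2[0] - v1[0] * (Df @ v2)

# Right side, differentiating g_1 and g_2 numerically:
g1 = lambda y: f(y) * v2[0]               # (x; v_1) omitted
g2 = lambda y: f(y) * v1[0]               # (x; v_2) omitted
rhs = grad(g1, x) @ v1 - grad(g2, x) @ v2

print(h, rhs)   # should agree up to finite-difference error
assert np.isclose(h, rhs, atol=1e-6)
```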

