Differential as generalized directional deriv (Munkres Analysis on Manifolds)

mathmonkey

Homework Statement



Let ##A## be open in ##\mathbb{R}^n##; let ##\omega## be a ##(k-1)##-form in ##A##. Given ##v_1,\ldots,v_k \in \mathbb{R}^n##, define
##h(x) = d\omega(x)((x;v_1),...,(x;v_k)),##
##g_j(x) = \omega (x)((x;v_1),...,\widehat{(x;v_j)},...,(x;v_k)),##
where ##\hat{a}## means that the component ##a## is to be omitted.

Prove that ##h(x) = \sum _{j=1}^k (-1)^{j-1} Dg_j (x) \cdot v_j . ##
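(Not part of the problem, just how I am reading the statement: in the simplest case ##k = 1##, ##\omega## is a 0-form, i.e. a function ##f##, and ##g_1(x) = f(x)##, so the claim reduces to
##h(x) = df(x)((x;v_1)) = Df(x)\cdot v_1 = Dg_1(x)\cdot v_1,##
which is just the ordinary directional derivative of ##f## along ##v_1##.)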


Homework Equations



The problem is broken into 3 parts:
(a) Let ##X = \begin{bmatrix} v_1 & \cdots & v_k \end{bmatrix}## be the ##n \times k## matrix whose columns are the ##v_j##. For each ##j##, let ##Y_j = \begin{bmatrix} v_1 & \cdots & \hat{v}_j & \cdots & v_k \end{bmatrix}##. Given ##(i, i_1, \ldots, i_{k-1})##, show that
##\det X(i, i_1, \ldots, i_{k-1}) = \sum_{j=1}^{k} (-1)^{j-1} v_{ij} \det Y_j(i_1, \ldots, i_{k-1}).##
(A numerical check of this identity is sketched after this list.)
(b) Verify the theorem in the case ##\omega = fdx_I##.
(c) Complete the proof.
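As a quick sanity check of the identity in (a) (not part of the assigned proof), here is a small script; interpreting ##X(i, i_1, \ldots, i_{k-1})## as the square submatrix of ##X## formed from the listed rows, and the use of numpy, are my own assumptions.

```python
import numpy as np

# Numerical check of part (a): expansion of det X(i, i_1, ..., i_{k-1})
# along the row indexed by i.
# Assumption (mine): X(i, i_1, ..., i_{k-1}) denotes the k x k submatrix of X
# consisting of rows i, i_1, ..., i_{k-1}, in that order.

rng = np.random.default_rng(0)
n, k = 5, 3
V = rng.standard_normal((n, k))      # columns of V are v_1, ..., v_k
rows = [1, 3, 4]                     # the index tuple (i, i_1, ..., i_{k-1}), 0-based
i, rest = rows[0], rows[1:]

lhs = np.linalg.det(V[rows, :])      # det X(i, i_1, ..., i_{k-1})

rhs = 0.0
for j in range(k):                   # j is 0-based, so (-1)**j plays the role of (-1)^(j-1)
    Y_j = np.delete(V, j, axis=1)    # omit the column v_j
    rhs += (-1) ** j * V[i, j] * np.linalg.det(Y_j[rest, :])

print(np.isclose(lhs, rhs))          # expected: True (up to floating point)
```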

The Attempt at a Solution



I'm stuck on part (b), however. By the definition given in the text, if ##\omega = f\,dx_I## then ##d\omega = df \wedge dx_I##. I'm not quite sure how to link the result of part (a) to prove part (b); my unpacking of the definitions so far is written out below.
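With ##I = (i_1, \ldots, i_{k-1})## ascending, the way I read the definitions (so this is my own unpacking, which may be off),
##g_j(x) = f(x)\, dx_I(x)\bigl((x;v_1), \ldots, \widehat{(x;v_j)}, \ldots, (x;v_k)\bigr) = f(x)\det Y_j(i_1, \ldots, i_{k-1}),##
so that ##Dg_j(x)\cdot v_j = (Df(x)\cdot v_j)\,\det Y_j(i_1, \ldots, i_{k-1})##, the determinant factor being constant in ##x##. It's the ##h(x) = (df \wedge dx_I)(x)((x;v_1), \ldots, (x;v_k))## side that I can't match up with part (a). If anyone can shed any light on this problem I'd be really grateful! Thanks.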
 
Sorry, but I am not familiar with your notation. What is ##(x;v_i)##? Also, what is ##v_{ij}##? The ##j##-th component of ##v_i##? What is ##Dg_j##? The Jacobian? The gradient (which is possible under the identification ##T_p^* \mathbb{R}^n \cong \mathbb{R}^n##)? What is ##Dg_j \cdot v##? Is this the standard Euclidean product? Is ##I## a multi-index or a typo?

I will assume that ##Dg_j## is the gradient, so that ##Dg_j \cdot v_j## is the directional derivative.

I'm not sure what you are and are not allowed to use, but if ##\omega = f\,dx_I## then you are correct that ##d\omega = df \wedge dx_I##. Thus, in the simplest case where ##dx_I = dx_i## is a 1-form, for two vectors ##v, w## we have
##d\omega(v, w) = (df \wedge dx_i)(v, w) = df(v)\,dx_i(w) - dx_i(v)\,df(w) = w_i\,Df\cdot v - v_i\,Df\cdot w,##
where the second equality holds by the definition of the wedge product. Appropriate substitution of your vectors yields the desired equality. Perhaps induction will now work?
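More generally (assuming the standard expansion for the wedge of a 1-form with a ##(k-1)##-form, which I think is how this connects to your part (a)),
##(df \wedge dx_I)(x)\bigl((x;v_1), \ldots, (x;v_k)\bigr) = \sum_{j=1}^{k} (-1)^{j-1}\, df(x)\bigl((x;v_j)\bigr)\, dx_I(x)\bigl((x;v_1), \ldots, \widehat{(x;v_j)}, \ldots, (x;v_k)\bigr),##
and since ##df(x)((x;v_j)) = Df(x)\cdot v_j## while the ##dx_I## factor is the ##\det Y_j(i_1, \ldots, i_{k-1})## you already computed, each term is exactly ##(-1)^{j-1} Dg_j(x)\cdot v_j##. Equivalently, expand ##df \wedge dx_I = \sum_i D_i f\, dx_i \wedge dx_I## and apply your identity from (a) to ##(dx_i \wedge dx_I)(x)((x;v_1), \ldots, (x;v_k)) = \det X(i, i_1, \ldots, i_{k-1})##; the double sum rearranges into the same expression.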

 