Can someone explain (Differential Forms)

latentcorpse
(i) if \alpha=\sum_i \alpha_i(x) dx_i \in \Omega^1 and \beta=\sum_j \beta_j(x) dx_j \in \Omega^1,

then \alpha \wedge \beta = \sum_{i,j} \alpha_i(x) \beta_j(x) dx_i \wedge dx_j \in \Omega^2

NOW THE STEP I DON'T FOLLOW - he jumps to this in the lecture notes:

\alpha \wedge \beta = \sum_{i<j} (\alpha_i \beta_j - \alpha_j \beta_i) dx_i \wedge dx_j

the subscript on the sum was either i<j or i,j - could someone tell me which, as well as explaining where on Earth this step comes from.

(ii) could someone explain the Leibniz rule for the exterior derivative d: \Omega^k \rightarrow \Omega^{k+1}

i.e. why d(\alpha^k \wedge \beta^l)=d \alpha^k \wedge \beta^l + (-1)^k \alpha^k \wedge d \beta^l
note that the superscript on a differential form indicates that it's a k-form or an l-form

my main problem here is where the (-1)^k comes from

cheers for your help
 
Hi latentcorpse! :smile:
latentcorpse said:
\alpha \wedge \beta = \sum_{i<j} (\alpha_i \beta_j - \alpha_j \beta_i) dx_i \wedge dx_j

the subscript on the sum was either i<j or i,j - could someone tell me which, as well as explaining where on Earth this step comes from.

It's i < j …

\alpha \wedge \beta = \sum_{i,j} \alpha_i \beta_j dx_i \wedge dx_j = \sum_{i<j} (\alpha_i \beta_j - \alpha_j \beta_i) dx_i \wedge dx_j

since the wedge product is anti-commutative: dx_i \wedge dx_j = -dx_j \wedge dx_i, which also gives dx_i \wedge dx_i = 0.
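To spell the step out (a sketch in the thread's own notation, nothing beyond what's quoted above): split the double sum into i < j, i = j, and i > j, drop the diagonal, and relabel the i > j part:

\sum_{i,j} \alpha_i \beta_j dx_i \wedge dx_j = \sum_{i<j} \alpha_i \beta_j dx_i \wedge dx_j + \sum_{i>j} \alpha_i \beta_j dx_i \wedge dx_j

In the i > j sum, swap the names of i and j (so it runs over i < j again) and use dx_j \wedge dx_i = -dx_i \wedge dx_j:

\sum_{i>j} \alpha_i \beta_j dx_i \wedge dx_j = \sum_{i<j} \alpha_j \beta_i dx_j \wedge dx_i = -\sum_{i<j} \alpha_j \beta_i dx_i \wedge dx_j

Adding the two pieces gives \sum_{i<j} (\alpha_i \beta_j - \alpha_j \beta_i) dx_i \wedge dx_j.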
why d(\alpha^k \wedge \beta^l)=d \alpha^k \wedge \beta^l + (-1)^k \alpha^k \wedge d \beta^l

my main problem here is where the (-1)^k comes from

d is like wedging with another one-form (it raises the degree by 1), so to move d through the k elementary one-form factors of \alpha^k you pay one minus sign per factor, i.e. (-1)^k in total :wink:
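In case it helps, here's the coordinate computation written out (a sketch, assuming \alpha = a \, dx_I with a multi-index I of length k and \beta = b \, dx_J, sums over multi-indices suppressed):

d(\alpha \wedge \beta) = d(ab) \wedge dx_I \wedge dx_J = (b \, da + a \, db) \wedge dx_I \wedge dx_J

The first term is (da \wedge dx_I) \wedge (b \, dx_J) = d\alpha \wedge \beta. In the second term the one-form db has to hop over each of the k factors of dx_I, picking up one minus sign per hop:

a \, db \wedge dx_I \wedge dx_J = (-1)^k a \, dx_I \wedge db \wedge dx_J = (-1)^k \alpha \wedge d\beta

which is exactly where the (-1)^k comes from.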
 
cool, i get the 2nd part now - still not sure why it's i<j or why \alpha_i \beta_j suddenly becomes \alpha_i \beta_j - \alpha_j \beta_i
 
(have an alpha: α and a beta: β and a wedge: ∧ :wink:)
latentcorpse said:
… still not sure why it's i<j or why \alpha_i \beta_j suddenly becomes \alpha_i \beta_j - \alpha_j \beta_i

because dxi ∧ dxj = -dxj ∧ dxi (and dxi ∧ dxi = 0),

so if you sum αiβj dxi ∧ dxj over all i and j, the i = j terms vanish,

and for i > j you just "turn it round" and relabel i ↔ j - that's where the -αjβi comes from :wink:
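A concrete sanity check (a worked sketch in two variables, keeping the thread's notation): take \alpha = \alpha_1 dx_1 + \alpha_2 dx_2 and \beta = \beta_1 dx_1 + \beta_2 dx_2. Then

\alpha \wedge \beta = \alpha_1 \beta_2 dx_1 \wedge dx_2 + \alpha_2 \beta_1 dx_2 \wedge dx_1 = (\alpha_1 \beta_2 - \alpha_2 \beta_1) dx_1 \wedge dx_2

the dx_1 \wedge dx_1 and dx_2 \wedge dx_2 terms vanish, and the dx_2 \wedge dx_1 term gets "turned round" - exactly the i < j formula with i = 1, j = 2.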
 