How can the interior product be visualized using a concrete example?

  • #1
JonnyMaddox
In Nakahara's book, the interior product is defined like this :
[itex]i_{X} \omega = \frac{1}{r!} \sum\limits_{s=1}^r X^{\mu_{s}} \omega_{\mu_{1} \dots \mu_{s} \dots \mu_{r}} (-1)^{s-1} dx^{\mu_{1}} \wedge \dots \wedge \widehat{dx^{\mu_{s}}} \wedge \dots \wedge dx^{\mu_{r}}[/itex]
(the hat marks the factor that is omitted from the wedge product)

Can someone please give me a concrete example of this? I can't make sense of it. For example, how does this look explicitly for [itex]i_{e_{x}}(dx \wedge dy) = dy[/itex]?

Greets
 
  • #2
I think the Wiki article is good:
http://en.wikipedia.org/wiki/Interior_product

[itex]( \iota_X\omega )(X_1,\ldots,X_{p-1})=\omega(X,X_1,\ldots,X_{p-1})[/itex]
for any vector fields [itex]X_1, \ldots, X_{p-1}.[/itex]

The interior product is the unique antiderivation of degree −1 on the exterior algebra such that on one-forms α
[itex]\displaystyle\iota_X \alpha = \alpha(X) = \langle \alpha,X \rangle,[/itex]
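To see this definition in coordinates, here is a small numerical sketch (my own illustration; the arrays and names are not from the posts): a 2-form on ##\mathbb{R}^3## is stored as its antisymmetric component matrix, and contracting ##X## into the first slot is just a vector-matrix product.

```python
import numpy as np

# Component matrix W_ij = omega(e_i, e_j) of the 2-form
# omega = dx^1 ∧ dx^2 on R^3 (antisymmetric by construction).
W = np.array([[0., 1., 0.],
              [-1., 0., 0.],
              [0., 0., 0.]])

X = np.array([1., 0., 0.])   # the vector field e_1
Y = np.array([0., 1., 0.])   # a test vector e_2

# (iota_X omega)_j = X^i W_ij : plug X into the first slot.
iota_X_omega = X @ W
print(iota_X_omega)          # [0. 1. 0.] -> the components of dx^2

# Check the defining property (iota_X omega)(Y) = omega(X, Y).
assert np.isclose(iota_X_omega @ Y, X @ W @ Y)
```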
 
  • #3
One should view it as the generalization of matrix multiplication to higher rank tensors. WWGD has already given the appropriate formulas.
EDIT:
To compute
$$\partial_1 \cdot (d x^1 \wedge d x^2) $$
where ##X \cdot \equiv i_{X}## in your notation, I recommend using
$$X \cdot (\alpha \wedge \beta) = (X \cdot \alpha )\wedge \beta + (-1)^k \, \alpha \wedge (X \cdot \beta)$$
for ##\alpha## a ##k##-form and ##\beta## an ##l##-form, which is a really useful formula. Try it!
As you asked to use the definition
$$\omega= d x^1 \wedge d x^2 = \frac{1}{2} \omega_{ij} \, d x^i \wedge d x^j \quad \text{with} \quad
(\omega_{ij}) =
\begin{pmatrix}
0 & 1 \\
-1 & 0
\end{pmatrix}
$$
$$X = \partial_1 = X^i \, \partial_i \quad \text{with} \quad
(X^i) =
\begin{pmatrix}
1 \\
0
\end{pmatrix}$$
hence
$$X \cdot \omega = \frac{1}{2} (X^i \omega_{ij} \, d x^j - X^i \omega_{ji} \, d x^j)
= X^i \omega_{ij} \, d x^j =
\begin{pmatrix}
1 & 0
\end{pmatrix}
\begin{pmatrix}
0 & 1 \\
-1 & 0
\end{pmatrix}
\begin{pmatrix}
d x^1 \\
d x^2
\end{pmatrix}
= d x^2$$
Here you also see what I meant about it being just ordinary matrix multiplication.
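The antiderivation formula in this post is also easy to check numerically; here is a quick sketch of my own (random components, names illustrative) for two one-forms ##\alpha, \beta##, where the rule reduces to ##i_X(\alpha \wedge \beta) = \alpha(X)\,\beta - \beta(X)\,\alpha##:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, X = rng.normal(size=(3, 3))      # two 1-forms and a vector on R^3

AB = np.outer(a, b) - np.outer(b, a)   # component matrix of a ∧ b
lhs = X @ AB                           # i_X (a ∧ b)
rhs = (a @ X) * b - (b @ X) * a        # (i_X a) b + (-1)^1 a (i_X b)
assert np.allclose(lhs, rhs)           # the Leibniz rule holds
```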
 
  • #4
Hey, sorry I didn't notice your answer. Thank you very much, I'll think about it. I really have to get used to this notation.

Greets
 
  • #5
Don't be discouraged. It's all just vectors, covectors and matrices multiplied from the right and left. If you have a real vector space ##V## and a basis ##e_1, \dots, e_n##, then
$$v= v^i \, e_i .$$
If you have a functional ##\phi##, that is a map that takes a vector and spits out a real number, then you can write it as a covector
$$\phi = \phi_i \, e^i$$
where ##e^i## is the dual vector such that
$$e^i \cdot e_j = \delta^i_j \, .$$
This is a consequence of the Riesz representation theorem in the special case of a finite-dimensional inner product space, which is necessarily a Hilbert space.
This is how it works:
$$\phi(v) \equiv \phi \cdot v = \phi_i \, e^i \cdot v^j \, e_j = \phi_i \, v^j \, (e^i \cdot e_j)
= \phi_i \, v^j \, \delta^i_j = \phi_i \, v^i \, .$$
Now, a "normal" matrix ##A## (in your vector space ##V##) is something that takes a vector and spits out a vector, so
$$A = A^i{}_j \, e_i \otimes e^j \, .$$
In a similar manner, you can construct maps that take a vector and spit out a covector, maps that take a covector and spit out a vector, etc.
This differential geometry stuff is just ##V## being the tangent space and
$$e_i = \partial_i := \frac{\partial}{\partial x^i} \quad, \quad e^i = d x^i \, .$$
The "interior product" is just multiplication.
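The basis/dual-basis bookkeeping above can be sketched in a few lines (my own example; the particular basis is an arbitrary choice): build a dual basis so that ##e^i \cdot e_j = \delta^i_j## and confirm that ##\phi(v) = \phi_i v^i## no matter which basis you used.

```python
import numpy as np

# A basis of R^2 (rows are the basis vectors e_1, e_2) and its dual
# basis, constructed so that e_dual[i] @ e[j] == delta_ij.
e = np.array([[2., 0.],
              [1., 1.]])
e_dual = np.linalg.inv(e).T     # rows are the dual covectors e^1, e^2
assert np.allclose(e_dual @ e.T, np.eye(2))

phi_components = np.array([3., -1.])   # phi = 3 e^1 - 1 e^2
v_components = np.array([2., 5.])      # v = 2 e_1 + 5 e_2
phi = phi_components @ e_dual          # phi in standard coordinates
v = v_components @ e                   # v in standard coordinates

# phi(v) = phi_i v^i, independent of the basis chosen above.
assert np.isclose(phi @ v, phi_components @ v_components)
```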
 
  • #6
Hi, thanks, yeah that really helped, I have to think more in terms of matrices. So when I have a form with coefficients like [itex]\omega_{\mu \nu}[/itex], would this be a matrix full of forms that takes a vector (field) as input and gives a 1-form as output? But I'm not sure what the role of the tensor product is here. I know that the forms are defined via the tensor product and I know how to take the tensor product of two vectors to get a "tensor matrix", but how does this all fit together?
 
  • #7
JonnyMaddox said:
Hi, thanks, yeah that really helped, I have to think more in terms of matrices. So when I have a form with coefficients like [itex]\omega_{\mu \nu}[/itex], would this be a matrix full of forms that takes a vector (field) as input and gives a 1-form as output? But I'm not sure what the role of the tensor product is here. I know that the forms are defined via the tensor product and I know how to take the tensor product of two vectors to get a "tensor matrix", but how does this all fit together?

Actually, the product in the algebra of differential forms (a subalgebra of the graded tensor algebra ##V^{\otimes n}## of a vector space ##V##, where the product on tensors is the tensor product) is the wedge product. The differential forms are the subalgebra of alternating tensors, and the product in this (also graded, by degree) subalgebra is the wedge product. And the only matrix I know of with forms as entries is the connection form, though maybe there is some other one out there I don't know of.
 
  • #8
You have to go back to linear algebra. A matrix is not a collection of numbers written in rows and columns; a matrix is an "abstract" linear map that takes in a vector and spits out a vector or, equivalently, one that takes a vector from the right and a covector from the left and gives you a number. The collection of numbers written in rows and columns is just a representation of a matrix. Just like you can write vectors with respect to different bases, you can write matrices with respect to different bases. The notation
$$A= A^i{}_j \, e_i \otimes e^j$$
just makes this explicit. On a conceptual level, you get the components by taking your "abstract" matrix ##A## and computing
$$A^i{}_j := A(e^i, e_j) \equiv e^i \cdot A \cdot e_j \, \in \mathbb R$$

Analogously, in differential geometry, you have an "abstract" ##k##-form ##\omega## and a basis ##(\partial_\mu)_{\mu=0}^n##, and you define
$$\omega_{\mu_1 \dots \mu_k} := \omega( \partial_{\mu_1}, \dots, \partial_{\mu_k})
\equiv \partial_{\mu_k} \cdot ( \partial_{\mu_{k-1}} \cdot ( \cdots (\partial_{\mu_1} \cdot \omega) \cdots ))$$
such that you can write ##\omega## (at least locally) as
$$\omega = \omega_{\mu_1 \dots \mu_k} \, d x^{\mu_1} \otimes \dots \otimes d x^{\mu_k} = \omega_{[\mu_1 \dots \mu_k]} \, d x^{\mu_1} \otimes \dots \otimes d x^{\mu_k} $$
$$ = \omega_{\mu_1 \dots \mu_k} \, d x^{[\mu_1} \otimes \dots \otimes d x^{\mu_k]}
= \frac{1}{k!}\omega_{\mu_1 \dots \mu_k} \, d x^{\mu_1} \wedge \dots \wedge d x^{\mu_k} \, .$$
So the ##\omega_{\mu_1 \dots \mu_k}## are just (local) functions on your manifold. This also shows how the different tensor products relate.

So if you take a vector field ##X=X^\mu \, \partial_\mu##, then
$$X \cdot \omega = (X^\nu \, \partial_\nu ) \cdot (\omega_{\mu_1 \dots \mu_k} \, d x^{\mu_1} \otimes \dots \otimes d x^{\mu_k}) = X^\nu \omega_{\mu_1 \dots \mu_k} (\partial_\nu \cdot d x^{\mu_1}) \otimes
d x^{\mu_2} \otimes \dots \otimes d x^{\mu_k} $$
$$ = X^\nu \omega_{\mu_1 \dots \mu_k} \delta_\nu ^{\mu_1} \,
d x^{\mu_2} \otimes \dots \otimes d x^{\mu_k} = \omega_{\mu_1 \dots \mu_k} \, X^{\mu_1} d x^{\mu_2} \otimes \dots \otimes d x^{\mu_k}
= \frac{1}{(k-1)! } \omega_{\mu_1 \dots \mu_k} \, X^{\mu_1} d x^{\mu_2} \wedge \dots \wedge d x^{\mu_k}$$
So all you do is "plugging" ##X## into the first "slot". Nothing complicated here if viewed in the right light.
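The "plugging into the first slot" rule works in any rank. Here is a sketch of my own using `numpy.einsum` (random components, names illustrative) that antisymmetrizes a rank-3 tensor into a 3-form, contracts a vector into the first index, and checks that the result is again alternating and agrees with ##\omega(X, Y, Z)##:

```python
from itertools import permutations

import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(4, 4, 4))

def sgn(p):
    """Parity (+1/-1) of a permutation given as a tuple."""
    s, q = 1, list(p)
    for i in range(len(q)):
        while q[i] != i:
            j = q[i]
            q[i], q[j] = q[j], q[i]
            s = -s
    return s

# Antisymmetrize: W_{abc} = T_{[abc]}, a "3-form" on R^4.
W = sum(sgn(p) * T.transpose(p) for p in permutations(range(3))) / 6

X, Y, Z = rng.normal(size=(3, 4))

# i_X omega in components: X^a W_{abc}.
iXw = np.einsum('a,abc->bc', X, W)

assert np.allclose(iXw, -iXw.T)   # the result is again alternating
# (i_X omega)(Y, Z) = omega(X, Y, Z):
assert np.isclose(np.einsum('b,c,bc->', Y, Z, iXw),
                  np.einsum('a,b,c,abc->', X, Y, Z, W))
```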
 
  • #9
Nice, Geometry Dude, you cleared up my confusion about the use of tensors with "differential forms" (I thought the algebra product on differential forms was the wedge product and not the tensor product). These are actually tensors and not really forms. After antisymmetrizing (making the tensors alternating), the tensors become forms, and then we use the wedge product on them.
 
  • #10
Yes, mathematicians tend to make everything complicated by talking about isomorphisms and operators all the time without actually writing down what they're doing. This can become really confusing when you're trying to use this stuff. This calculation shows that the wedge product is just an anti-symmetrized tensor product. You can define a symmetrized product ##\vee## analogously. Took me a while to figure this stuff out myself.
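A tiny sketch of my own (for two fixed one-forms, chosen arbitrarily) of the statement that ##\wedge## is the antisymmetrized and ##\vee## the symmetrized tensor product:

```python
import numpy as np

a = np.array([1., 2., 0.])
b = np.array([0., 1., 3.])

wedge = np.outer(a, b) - np.outer(b, a)   # antisymmetrized tensor product
vee = np.outer(a, b) + np.outer(b, a)     # symmetrized tensor product

assert np.allclose(wedge, -wedge.T)       # wedge is alternating
assert np.allclose(vee, vee.T)            # vee is symmetric
# the tensor product splits into symmetric + antisymmetric parts:
assert np.allclose(np.outer(a, b), (wedge + vee) / 2)
```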
 
  • #11
As a mathematician (in training), I think the problem is often just that of mathematicians doing a poor job of teaching mathematics, or not knowing the subject well enough themselves (people are hired not because they are good teachers, but because the schools want the prestige of the research done by the hires).

Still, to be fair, it is difficult to simplify and be clear when using the maze of indices, subscripts and superscripts and just nasty notation that seems intrinsic to differential geometry.
 
  • #12
Ok, thanks really for the effort, Geometry_dude. I think you mean it like this:

When you take the general definitions
[itex]\alpha \wedge \beta = \alpha \otimes \beta - \beta \otimes \alpha[/itex] (for one-forms)
[itex](\alpha \wedge \beta)(X,W) =\alpha(X)\beta(W)-\beta(X)\alpha(W)[/itex]
and the vector field
[itex]X=y\frac{\partial}{\partial x}+2z\frac{\partial}{\partial y}+3xy\frac{\partial}{\partial z}[/itex]

with the two-form

[itex]\omega=3\,dx \wedge dy -(14zx+2)\, dx \wedge dz[/itex]

then

[itex]\omega = 3(dx\otimes dy - dy \otimes dx) - (14zx+2) (dx\otimes dz - dz\otimes dx)[/itex]
[itex]i_X \omega = 3(dx(X)\, dy - dy(X)\, dx) - (14zx+2) (dx(X)\, dz - dz(X)\, dx)[/itex]

[itex]i_X \omega = 3[dx(y\frac{\partial}{\partial x}+2z\frac{\partial}{\partial y}+3xy\frac{\partial}{\partial z})\, dy - dy(y\frac{\partial}{\partial x}+2z\frac{\partial}{\partial y}+3xy\frac{\partial}{\partial z})\, dx] - (14zx+2) [dx(y\frac{\partial}{\partial x}+2z\frac{\partial}{\partial y}+3xy\frac{\partial}{\partial z})\, dz - dz(y\frac{\partial}{\partial x}+2z\frac{\partial}{\partial y}+3xy\frac{\partial}{\partial z})\, dx][/itex]

And then use the orthogonality relation [itex]\delta^{i}_{j}[/itex]. But isn't thinking about this in terms of matrix multiplication easier? What would this look like with matrix multiplication?

@WWGD: What I meant by a matrix of forms was from the post of Geometry_dude with [itex]\omega = dx^{1} \wedge dx^{2}= \frac{1}{2}\omega_{ij}dx^{i}\wedge dx^{j} = \begin{pmatrix} \omega_{11}dx^{1}\wedge dx^{1} & \omega_{12}dx^{1}\wedge dx^{2} \\ \omega_{21}dx^{2}\wedge dx^{1} & \omega_{22}dx^{2}\wedge dx^{2} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} [/itex] I think this is what you meant, Geometry_dude, right?
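For the concrete ##X## and ##\omega## from this post, the matrix-multiplication picture can be checked numerically at a sample point; this sketch is my own (the evaluation point is an arbitrary choice, and I use the coefficient ##-(14zx+2)## from the definition of ##\omega##):

```python
import numpy as np

# Sample point (x, y, z) = (1, 2, 3) -- an arbitrary choice for testing.
x, y, z = 1., 2., 3.

X = np.array([y, 2*z, 3*x*y])   # X = y d/dx + 2z d/dy + 3xy d/dz

# Component matrix W_ij of omega = 3 dx∧dy - (14zx+2) dx∧dz
# in the basis (dx, dy, dz):
c = 14*z*x + 2
W = np.array([[0., 3., -c],
              [-3., 0., 0.],
              [c, 0., 0.]])

iXw = X @ W   # components of i_X omega in the basis (dx, dy, dz)

# Compare with i_X omega = 3(dx(X) dy - dy(X) dx) - c (dx(X) dz - dz(X) dx),
# using dx(X) = y, dy(X) = 2z, dz(X) = 3xy:
dx_, dy_, dz_ = np.eye(3)
expected = 3*(y*dy_ - 2*z*dx_) - c*(y*dz_ - 3*x*y*dx_)
assert np.allclose(iXw, expected)
```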
 

