B3NR4Y said:
Okay, I feel like I understand a bit more now. The dummy indices thing seems so obvious, and yeah I did get the identity wrong now that I have my book to look at.
So, to check my understanding of the cross product fully, the notation is
\nabla \times \vec{V} = \epsilon_{ijk}\, \partial_{j} V_{k}\, \vec{e}_{i}
Which means the i-th component of the curl of the vector field V is \epsilon_{ijk} \partial_{j} V_{k}, so the first component (the component along the first basis vector, with (1,2,3) ⇔ (i,j,k) for the basis vectors) is
(\nabla \times \vec{V})_{1} = \epsilon_{1jk}\, \partial_{j} V_{k} = \sum_{j=1}^{3} \sum_{k=1}^{3} \epsilon_{1jk}\, \partial_{j} V_{k} = \partial_{2} V_{3} - \partial_{3} V_{2}
This makes more sense to me now, and I suppose I see why Einstein called it his greatest contribution to mathematics. I also read on Wikipedia that the indices you're summing over should come in pairs of one low and one high, for example x_{i} y^{i} for the dot product.
That's perfectly right. The convention of summing only over paired high and low indices is meant to emphasize that the proper setting for these interactions is dual vector spaces. In basic courses, the dot product is usually just written the way you wrote it: a sum over products of components of two vectors from the same vector space. However, if you look at matrix multiplication, that recipe only makes sense if one of the vectors is written as a row matrix and the other as a column matrix. We can also view the action of a matrix on a column vector as a stack of row vectors, each acting on that single column vector to produce a list of scalars. So we get the idea of row vectors as operators on column vectors: each row vector is a function that maps column vectors to real numbers. We may therefore write the dot product as an interaction between a row vector and a column vector, hence the lowered and raised indices.
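To make that concrete, here is the same dot product written with the index pairing and as a row matrix acting on a column matrix (just an illustrative expansion in three dimensions):
x \cdot y = x_{i}\, y^{i} = \begin{pmatrix} x_{1} & x_{2} & x_{3} \end{pmatrix} \begin{pmatrix} y^{1} \\ y^{2} \\ y^{3} \end{pmatrix} = x_{1} y^{1} + x_{2} y^{2} + x_{3} y^{3}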
The question then arises as to whether it makes sense to talk about dot products as being related to actions of these "linear functionals", which themselves form a vector space, on vectors. The Riesz representation theorem, which applies to more general vector spaces than matrix algebra can handle, makes this correspondence precise and is one starting point for the whole notion of dual vector spaces. However, it is more than mere convention that we use the extended notion of dual vector spaces. It becomes unavoidable to separate vectors of different types living in different places once we start talking about curved spaces and infinite-dimensional spaces, such as the curved manifolds of general relativity and the rigged Hilbert spaces of quantum mechanics. I think I will let the book or someone more qualified introduce that notion, though, as it is very involved, and I will probably mess it up. :-)
B3NR4Y said:
And a question about tensors themselves. The book says Cartesian tensors are the most important for physics, so it will neglect other types, and it defines a Cartesian tensor like this:
T = \left[ \begin{array}{ccc}
T_{xx} & T_{xy} & T_{xz} \\
T_{yx} & T_{yy} & T_{yz} \\
T_{zx} & T_{zy} & T_{zz} \\
\end{array} \right]
= \sum_{ij} T_{ij}\, \vec{e}_{i} \otimes \vec{e}_{j}
= T_{ij}\, \vec{e}_{ij}
It also says to note, which I did and thought was very interesting, that if you multiply this tensor (in matrix form) by a vector in column matrix form, you get another column vector. So this had me thinking: if multiplying a tensor by a vector gives a transformation of a vector, why not just call tensors transformations? I know this is a shallow look at tensors, and I realize that you can multiply tensors together and get something that is not a vector, so they aren't just transformations, but I can't quite conceptualize what tensors (of rank higher than 1) "mean", like I can with vectors.
Thanks for your time
I have to say, I don't really like texts that introduce tensors in terms of coordinates before talking about either geometry or linear algebra. The coordinate representation of a tensor is secondary to its nature. It is as if a physics text told you that a vector is an ordered list of real numbers and then never mentioned the fact that it could be regarded as a direction and magnitude in space, but then went on to talk about forces, derivatives, and so forth using only the algebra of ordered lists of numbers. What a nightmare that would be.
A tensor, as far as physics is concerned, is a multilinear functional. That means it takes as input an ordered list of vectors, which may be from different vector spaces, and outputs a single real number. It is linear in each input separately, in the same way you are already familiar with from linear transformations on single vectors.
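Spelled out for a tensor T with two vector arguments, multilinearity just means linearity in each slot while the other is held fixed (a small statement of the property, nothing more):
T(a\vec{u} + b\vec{u}',\, \vec{v}) = a\, T(\vec{u}, \vec{v}) + b\, T(\vec{u}', \vec{v}), \qquad T(\vec{u},\, a\vec{v} + b\vec{v}') = a\, T(\vec{u}, \vec{v}) + b\, T(\vec{u}, \vec{v}')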
A simple example of a tensor is one which takes two vectors as input and outputs their dot product. It is important that we regard the dot product as attached to the geometric vectors, not their coordinates: the formula for the dot product in a different coordinate system must be adjusted so that it still equals the product of the magnitudes of the vectors times the cosine of the angle between them, not merely the sum of products of components we get in a rectangular coordinate system. The tensor is therefore a geometric object, independent of any particular coordinate system, and this is the point of using tensors in the first place. As you already know, the dot product is linear in each input, and it outputs a real number. A tensor with only two arguments from the same vector space V (or a vector space and a copy of the same space, depending on how pedantic you want to be) is a map from V×V into R. As you should verify yourself, any tensor like this can be written as an n×n matrix if the vector space V has dimension n. You should also verify that the matrix form of this particular tensor, with respect to rectangular coordinates for V, is the n×n identity matrix. Use a particular value of n, like n = 2 or n = 3, to start.
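As a sketch of that check for n = 3 (writing g for the dot-product tensor): the components of the tensor are its values on pairs of basis vectors, and in rectangular coordinates those values are just the Kronecker delta, so the matrix is the identity:
g(\vec{u}, \vec{v}) = g_{ij}\, u^{i} v^{j}, \qquad g_{ij} = g(\vec{e}_{i}, \vec{e}_{j}) = \vec{e}_{i} \cdot \vec{e}_{j} = \delta_{ij}, \qquad [g_{ij}] = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}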
An example of a tensor which has three inputs from the same vector space is the triple scalar product, which returns the signed volume of the parallelepiped constructed from three vectors, assuming they live in flat Euclidean space. Unlike tensors with only two inputs, we would need 3 indices, one for each vector, in order to write the components of this tensor with respect to a particular coordinate system. So this is an example of a tensor that cannot be reduced to a simple matrix transformation. The determinant is another such tensor, if we regard a matrix as an ordered list of row vectors or column vectors. As you study more, you will learn about the Riemann curvature tensor, which has 4 inputs, and no doubt many more.
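In rectangular coordinates on flat three-dimensional space, the component array of the triple scalar product is exactly the Levi-Civita symbol from your curl formula (just connecting it back to your notation):
T(\vec{u}, \vec{v}, \vec{w}) = \epsilon_{ijk}\, u^{i} v^{j} w^{k} = \vec{u} \cdot (\vec{v} \times \vec{w})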
I have greatly simplified this by only showing tensors that act on vectors, and not on linear functionals. It is quite easy to turn the previous argument around and look at the action of vectors on linear functionals as the operation, which makes vectors tensors as well. It turns out that, just as a dot product between two vectors is related to the action of a particular linear functional on the "second vector", we can consistently pair elements of dual vector spaces with particular elements of the vector space. When we do this, we either raise or lower the index related to that space to show that we are considering the related quantity (related through the Riesz representation theorem). Since we still want to regard the tensor as essentially the same geometric object, regardless of whether it is acting on vector spaces or their duals, and regardless of its coordinate representation, we call these identifications musical isomorphisms: "musical" because raising and lowering an index is whimsically related to sharpening or flattening a musical note.
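In index notation, the identification is carried out with the same dot-product (metric) tensor as before, which is where "raising" and "lowering" come from (a minimal sketch, writing g^{ij} for the inverse of g_{ij}):
v_{i} = g_{ij}\, v^{j} \quad (\flat, \text{ lowering}), \qquad \omega^{i} = g^{ij}\, \omega_{j} \quad (\sharp, \text{ raising})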
So tensors encapsulate scalars, vectors, linear transformations, and multilinear transformations into one theory. I will try to find a good free intro text for you if you want to read one on the side.
Edit: Okay, this looks nice, and is probably much better than my rambling:
https://www.math.ethz.ch/education/bachelor/lectures/fs2014/other/mla/ma.pdf . :-)
Edit 2: This looks slightly better:
http://rbowen.tamu.edu/ . Of course, you can just search for "Multilinear algebra" and read as much as you want about tensors.