True Algebraic Nature of Tensors

In summary, the author seeks guidance on understanding the true algebraic nature of tensors, an issue he has puzzled over for years. He holds that the vector nature of abstract member vectors is understood not through their individual internal workings, but through their relationship to all other members of the space. He seeks to show that covariance and contravariance are simply different representations mapped from the same tensor space of some order. He believes there is a set of tensor spaces, each of a different class, within which the inner and outer products and their various combinations are closed. Finally, he believes that tensor bases and class parameters may be specified so that he can map an abstract tensor space to manipulable objects amenable to analysis.
  • #1
Rising Eagle
I have been puzzling for years over the best point of view from which to comprehend the true algebraic nature of tensors.

With vector spaces, I puzzled similarly and concluded that vector spaces are basically sets of abstract members that satisfy closure under linear combination (i.e., any linear space). I realized that, given some selection of basis vectors and coordinate system type (e.g., Cartesian, cylindrical), mappings are used to send the abstract members to useful and manipulable objects such as ordered n-tuples, column matrices, polynomials, or any other object whose component coefficients we can manipulate. I always see the components not as the vectors themselves, but as just scale factors relative to the bases, which are themselves abstract unknowns. I concluded that the vector nature of abstract member vectors is understood not by their individual internal workings (which are usually inaccessible anyway), but by their relationship to all other members of the space. Of course one may assign a very manipulable type of object as the interpretation of the members (e.g., polynomials, n×n matrices for some n, displacement vectors), but the properties of the individual members of the space are not vector properties; the vector properties lie in, and are the sole domain of, the space as a whole.
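To see concretely that components are scale factors tied to a basis choice, here is a minimal numerical sketch (assuming the space of real polynomials of degree at most 2; the second basis is made up purely for illustration):

```python
import numpy as np

# The abstract vector: the polynomial p(x) = 3 + 2x + 5x^2.
# Relative to the monomial basis {1, x, x^2} its coordinates are:
p_monomial = np.array([3.0, 2.0, 5.0])

# Relative to a different basis {1, 1 + x, 1 + x + x^2}, the SAME
# polynomial has different coordinates. The columns of B express the
# new basis vectors in monomial coordinates.
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
p_new_basis = np.linalg.solve(B, p_monomial)

print(p_monomial)   # [3. 2. 5.]
print(p_new_basis)  # [ 1. -3.  5.]  same vector, different scale factors
```

Neither tuple is the polynomial itself; each is a description of it relative to a basis.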

For tensors, my struggle is ongoing and I seek able guidance. I thought to define a tensor as a member of a linear space. Simple and complete. For this to work, it must be proven that any member of any definable linear space is a tensor and vice versa. Is there such a proof, or perhaps a counterexample proving my point of view to be in error?

I have also thought that it may be more correct to define a tensor as any member of any definable inner product space. Can it be proven that any member of any definable inner product space is a tensor and vice versa?

This is the first step in a series of steps I wish to make to develop a cohesive understanding of tensors as an algebraic mathematical space.

My next step is to show that covariance and contravariance are simply different representations mapped from the same tensor space of some order, such that the dual space and primal space (or any mixture of the two for order > 1) are just different faces of the same inner product space. Here I would like to show that there truly is no distinction between the set of linear functionals on a vector space and the vector space itself. Functionals unfortunately inspire a picture of a math object with an operand input and an output, whereas the vector space has the role of a passive operand. However, any functional from the dual space, when paired with a vector, maps each vector in the primal space to a member of its underlying scalar field, and likewise any vector from the primal space, when paired with a functional, maps each functional from the dual space to a member of its underlying scalar field (which is, by definition, the same scalar field in both spaces). This tells me that the two spaces truly are on equal footing (like the bra and ket of quantum mechanics). And if tensors are of necessity members of inner product spaces, I believe the connection between covariant and contravariant representations should be easier to prove, and their presence would be more application related than a fundamental property of the tensor space.
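A small numerical sketch of this equal footing (assuming V = R³ after a basis choice, with functionals represented as component arrays; that identification is made only for illustration):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])       # a vector in V = R^3
omega = np.array([4.0, 0.0, -1.0])  # a functional in V*, as components

# omega acting on v ...
print(omega @ v)     # 1.0

# ... gives the same scalar as "v acting on omega" via the canonical
# embedding of V into V**, defined by v_hat(omega) := omega(v).
v_hat = lambda w: w @ v
print(v_hat(omega))  # 1.0
```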

My next step is to show that there is a set of tensor spaces, each of a different class (class meaning the order of the tensors in the space, with the dimension of each index slot specified), within which the inner and outer (tensor) products and their various combinations are closed. That is, each such operation pairs a member of one tensor class with a tensor from the same or another class to yield a tensor from either class or a third. Of course there are compatibility requirements that depend on any inner product contractions occurring when these operations are applied.
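A hedged sketch of this closure in component form (numpy arrays stand in for tensor components after a basis choice; the arrays carry no covariant/contravariant bookkeeping, so this shows only the order arithmetic):

```python
import numpy as np

u = np.random.rand(3)     # an order-1 tensor (one slot of dimension 3)
T = np.random.rand(3, 3)  # an order-2 tensor

# The outer (tensor) product pairs an order-1 tensor with an order-2
# tensor and lands in the order-3 class.
S = np.einsum('i,jk->ijk', u, T)
print(S.shape)            # (3, 3, 3)

# A contraction (an inner-product-style pairing of two slots) maps the
# order-3 class back down to the order-1 class.
w = np.einsum('iik->k', S)
print(w.shape)            # (3,)
```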

My final step is to understand how tensor bases (in analogy with those of vector spaces) and class parameters may be specified so that I can map an abstract tensor space of some given class, with its tensor properties in full glory, to manipulable objects which are amenable to analysis.
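As an illustration of this last step, a minimal sketch (a hypothetical bilinear map on R² stands in for an abstract order-2 tensor): specifying a basis maps the abstract tensor to a manipulable component array via g_ij = g(b_i, b_j).

```python
import numpy as np

# An order-2 tensor on R^2, given "abstractly" as a bilinear function.
g = lambda u, v: 2*u[0]*v[0] + u[0]*v[1] + u[1]*v[0] + 3*u[1]*v[1]

# Choosing a basis {b1, b2} maps the abstract tensor to components:
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
G = np.array([[g(bi, bj) for bj in basis] for bi in basis])
print(G)
# [[2. 1.]
#  [1. 3.]]
```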

If there are other steps I must take to complete my cohesive understanding, I have not ascertained them yet. I wish to eliminate gaps in my understanding of how basic algebraic concepts develop into fully fledged mathematical objects used in differential geometry and related theories in physics.

Any enlightenment relevant to any or all of the above paragraphs on tensors (and vectors too) is welcome.
 
  • #2
Rising Eagle said:
For tensors, my struggle is ongoing and I seek able guidance. I thought to define a tensor as a member of a linear space. Simple and complete. For this to work, it must be proven that any member of any definable linear space is a tensor and vice versa. Is there such a proof, or perhaps a counterexample proving my point of view to be in error?

A given tensor is of course a member of the linear space of tensors of the same type, and it is also true that any member of a vector space can be thought of as a tensor acting on members of the dual space, but that ("member of a vector space") is not the definition of a tensor.

Tensors are linear operators, acting on elements of a vector space and its dual. Simple and complete. What do you find unacceptable about this definition?
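For concreteness, a minimal sketch of this definition (assuming V = R², with a made-up component matrix A): a type-(1,1) tensor is a map taking one covector and one vector, linear in each slot.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])   # components of a type-(1,1) tensor on R^2

# The tensor as an operator: T(omega, v), linear in the covector omega
# and in the vector v separately.
T = lambda omega, v: omega @ A @ v

omega = np.array([1.0, 1.0])
u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])

# Linearity in the vector slot:
print(T(omega, 2*u + v), 2*T(omega, u) + T(omega, v))  # 7.0 7.0
```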
 
  • #3
dx said:
that ("member of a vector space") is not the definition of a tensor.

Tensors are linear operators, acting on elements of a vector space and its dual. Simple and complete. What do you find unacceptable about this definition?

I don't know that I do find your suggested definition unacceptable. In fact, I think it's pretty good. I have thought of defining tensors as linear operators that act on other tensors, and, to make it concrete, defining a zeroth order tensor as a member of a scalar field and a first order tensor as a member of a vector space. It's not so much that I ever decided against this approach as that I never felt it gave me full comprehension of the algebraic nature of tensors. It always felt like something important (some characteristic or property or behavior of tensors) was missing.

As for the red herring definition I gave, you state that it is not the definition of a tensor. I can accept that your wisdom on this exceeds my own and that you are correct. Yet it is not clear to me why it isn't, or cannot be, the definition of a tensor. What properties does it lack that a tensor must have, or what properties does it imply that a tensor cannot have? I believe that linear spaces lacking an inner product may prove not to satisfy the well-established meaning and usage of tensors (this is just conjecture on my part). I therefore gave a second possible definition requiring that an inner product be defined for all tensor spaces. If it is also wrong, I do not yet understand why. What does it lack that it needs, or what does it imply that it shouldn't?

I don't wish to see tensors as separate from vectors in any way (although I'm not deeply learned enough yet to know if this is possible). But I also wish to be able to view both tensors and vectors as active objects that serve as both operator and operand, as appropriate, in whatever specific analysis or derivation is at hand. And finally, I do not wish to leave the definition (whichever might turn out to be best) as the ending. I wish to create a full depiction in my mind of all the spaces that make up the tensor world and understand how they interact with each other.
 
  • #4
The space of non-linear operators on V is also a vector space, so the fact that an object belongs to a vector space does not make it a tensor. It can be a tensor, but on a different space.
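A minimal sketch of this point (V = R, with two made-up nonlinear maps): sums and scalar multiples of nonlinear operators are again operators on V, so they form a vector space, yet the members fail the linearity test.

```python
# Nonlinear operators on V = R, closed under pointwise addition and
# scaling, hence a vector space whose members are not linear maps.
f = lambda x: x**2
g = lambda x: x**3

h = lambda x: 2*f(x) + g(x)   # a linear combination: still in the space

# h belongs to this vector space, but h is not a linear operator:
print(h(1 + 2), h(1) + h(2))  # 45 vs 19, so h(x + y) != h(x) + h(y)
```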
 
  • #5
dx said:
The space of non-linear operators on V is also a vector space, so the fact that an object belongs to a vector space does not make it a tensor.

Ah. Excellent! You just taught me something important. You got me thinking about something I hadn't considered before. So, for example, a linear space of polynomials in n×n matrices for some n is a vector space whose members are nonlinear operators. I never considered that counterexample (and yet in my first post, polynomials were cited as one of my examples of a vector space!).

By the way, thank you very much for being the only one to roll up his sleeves and address my questions.

Elaborating further, this is a vector space which may even be enriched to a ring with polynomial matrix multiplication (or perhaps even a full algebra, or an algebraic field if a polynomial matrix division can be defined; I wonder if that's possible).

As I think about it, I don't see how an inner product can be defined over such a space, and I don't know whether it can or cannot. If it cannot, it serves as a perfect counterexample to my first simple and complete definition. It does not rule out, however, my second definition, which requires an inner product be defined. On the other hand, if an inner product can be defined for this example, it serves as a counterexample to my second proposed definition as well.

If Taylor series expansion applies to the members of any linear space of nonlinear operators, then there will always be a mapping from such a linear space to a linear space of polynomials, and this latter space serves as a litmus test for my second definition. If such a linear space of polynomial forms can be enriched to have an inner product, my second definition falls. This is really good information.
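On the open question above: any finite-dimensional real vector space does admit an inner product, since one can pick a basis, declare it orthonormal, and use the resulting coefficient-wise dot product. A minimal sketch for polynomials of degree at most 2; the coefficient-wise product is one arbitrary choice, not a canonical one:

```python
import numpy as np

# Identify p(x) = a0 + a1 x + a2 x^2 with (a0, a1, a2) and use the dot
# product of coefficient tuples as the inner product.
p = np.array([1.0, 0.0, 2.0])   # 1 + 2x^2
q = np.array([3.0, 1.0, -1.0])  # 3 + x - x^2

print(p @ q)  # 1.0, the inner product <p, q>
print(p @ p)  # 5.0, positive for p != 0 (positive-definiteness)
```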

dx said:
It can be a tensor, but on a different space.

Do you have an example or were you speaking in generalities in the sense that you have no reason to rule out this possibility? At this point, I do not see that a nonlinear operator can act linearly on some spaces, but not on others. Since you say that it can, I shouldn't rule this behavior out because it is a very pertinent consideration in what can and cannot be a tensor and under what circumstances. As such, it affects or has a hand in shaping the overall algebraic structure of the set of tensor spaces.
 
  • #6
Rising Eagle said:
Do you have an example or were you speaking in generalities in the sense that you have no reason to rule out this possibility? At this point, I do not see that a nonlinear operator can act linearly on some spaces, but not on others. Since you say that it can, I shouldn't rule this behavior out because it is a very pertinent consideration in what can and cannot be a tensor and under what circumstances. As such, it affects or has a hand in shaping the overall algebraic structure of the set of tensor spaces.
Technically, a non-linear operator on V by definition acts on elements of V, not on elements of any other space. But there may be a natural way to associate with that operator another operator acting on a different space.

Let W be the space of non-linear operators on V. Members of W, by definition, are operators that act on members of V. But for any member of W there is a natural way to construct from it a linear operator acting on the dual space W*:

Let x be a member of W, and ω a member of W*. Then we can define the action of x on ω as ω(x), which gives us a linear operator on W*.

So x is a non-linear operator on V, but if we define its action on elements of W* as above, it is a linear operator on W*.
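A minimal numerical sketch of this construction (identifying W with R² after a basis choice, purely for illustration): the induced map x_hat(ω) := ω(x) is linear in ω, whatever x itself does to elements of V.

```python
import numpy as np

x = np.array([2.0, -1.0])        # a member of W, in components

# The induced operator on W*: x_hat(omega) := omega(x).
x_hat = lambda omega: omega @ x

w1 = np.array([1.0, 0.0])        # two covectors in W*
w2 = np.array([0.0, 1.0])

# x_hat is linear in its covector argument:
print(x_hat(3*w1 + w2), 3*x_hat(w1) + x_hat(w2))  # 5.0 5.0
```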
 
  • #7
dx said:
Let x be a member of W, and ω a member of W*. Then we can define the action of x on ω as ω(x), which gives us a linear operator on W*.

This is very good. So my next question is: can a dual space be defined for each and every possible linear space? If the answer is yes, does that imply that an inner product can always be defined as well?

This has relevance to two of my goals. The first is to find a counterexample to my second definition, which would require me to reorient my thinking about where tensors fit into the larger algebraic scheme. So far, I have learned from you that the set of all tensors is a proper subset of the set of all vectors (members of a linear space). If my second definition falls, they will also be a proper subset of the set of all members of all inner product spaces.

The second is a stronger understanding of the concrete relationship between inner products and dual spaces. More specifically, are they equivalent? Reciprocal coordinates are related as well, and I believe a space of reciprocal coordinates is dual to a space of coordinates as mapped from some vector space by designation of a basis. This leads me to think that the original vector space, with a different (reciprocal) choice of basis, maps just as easily to a space of coordinates that is identical to the original dual. This tells me that the dual space and the normal coordinate space are actually the same space, just with different coordinate labels (coordinations). Somehow, though, this might be too strong a claim, so I wish to understand why it would not be so.

I am aware that it is said there is no difference for Euclidean or Cartesian spaces (because they have, by definition, orthogonal coordinate systems), but that there is otherwise. I am not sure I fully understand this either. In some sense I do, because the space of coordinates is usually based on basis vectors aligned to tangent lines at the origin, while the reciprocal coordinates are vectors perpendicular to surfaces (or volumes, or whatever the (n-1)-dimensional geometric objects are) defined by any pair (or more) of tangent lines at the origin. But coordinates don't define the space, they only label it, so other than the nature of measurement, I don't see a difference. Aren't they just different representations of the same space?

If this is so, I would like to make the claim that an abstract vector space (which has no designated basis) and its dual are reflections from an even more abstract precursor space. In my thinking, the primal and dual spaces, even without coordinates or designated bases, are still, somehow in some abstract sense, reciprocals of each other.

Thank you for your continued efforts to answer my questions.
 
  • #8
Rising Eagle said:
This is very good. So my next question is: can a dual space be defined for each and every possible linear space? If the answer is yes, does that imply that an inner product can always be defined as well?

Yes, the dual space always exists for any linear space, but I don't see what that has to do with inner products. For any real vector space, you can choose a basis and define an inner product of v and w as Σₙ vₙwₙ.
Rising Eagle said:
The second is a stronger understanding of the concrete relationship between inner products and dual spaces. More specifically, are they equivalent?

The dual space does not come equipped with an inner product, so I don't know what you mean by this. The original space also does not come equipped with an inner product, and one can potentially define many possible inner products on it, but the dual is unique.

Rising Eagle said:
If this is so, I would like to make the claim that an abstract vector space (which has no designated basis) and its dual are reflections from an even more abstract precursor space. In my thinking, the primal and dual spaces, even without coordinates or designated bases, are still, somehow in some abstract sense, reciprocals of each other.

In a sense, for finite dimensional vector spaces, the space and its dual are exactly the same as vector spaces. The dual of a 2 dimensional real vector space is also a 2 dimensional real vector space. But for infinite dimensional spaces, the dual can be different from the original space as a vector space.

What do you mean by 'reciprocal'? I have not come across that term in this context before.
 
  • #9
dx said:
Yes, the dual space always exists for any linear space, but I don't see what that has to do with inner products. For any real vector space, you can choose a basis and define an inner product of v and w as Σₙ vₙwₙ.

The dual space does not come equipped with an inner product, so I don't know what you mean by this. The original space also does not come equipped with an inner product, and one can potentially define many possible inner products on it, but the dual is unique.

Ok, so the dual space and inner product can be defined differently, but they seem to be principally the same thing. Your example of Σₙ vₙwₙ is one way to define an inner product. But as you said, one can potentially define many possible inner products on it. Consider a 2-D real vector space V with basis vectors b1 and b2. One can define the inner product by specifying b1*b1 = a, b1*b2 = b2*b1 = b, and b2*b2 = c for any arbitrary scalars a, b, c decided upon. Each combination of scalar values is a different inner product.

Now take the dual space W with basis functionals f1 and f2, whose elements are linear functionals over V where w(v) = d for some scalar d. The exact relationship of the dual is fully defined by f1(b1) = e, f1(b2) = f, f2(b1) = g, f2(b2) = h for any arbitrary scalars e, f, g, h. I'm not sure if there is a requirement here that f1(b2) = f2(b1), so I'll leave it as an open question for the moment. If it is required, then the forms of calculation for functionals over V and for the inner product of two elements of V are provably identical. I don't know it off the top of my head, but I think I may have seen such a proof before somewhere. Furthermore, this tells me a dual space is not unique, unless I have an incorrect definition of dual space (please correct me if I am using a wrong definition or doing something wrong).

Similar to the vector space V, I suppose that the space W also has an inner product: f1*f1 = i, f1*f2 = f2*f1 = j, and f2*f2 = k for any arbitrary scalars i, j, k decided upon. Each combination of scalar values is a different inner product.

But I still keep looking at it and seeing the same form of calculation being defined three different times. It seems redundant. I keep wondering if any analysis needs any more than one of the three to be defined. If not and any analysis can be carried out with only one defined, I make the claim that dual and inner product are equivalent and defining one determines the others.
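A hedged sketch of the pieces in play (V = R², with made-up numbers): an inner product is presented by a symmetric positive-definite Gram matrix, so the scalars a, b, c above are not fully arbitrary; and the dual basis is pinned down by f_i(b_j) = δ_ij, so its cross terms are forced to zero rather than left free.

```python
import numpy as np

# An inner product on V = R^2 via a Gram matrix G: <u, v> = u^T G v.
# Here a = 2, b = 1, c = 3; G must be symmetric positive definite.
G = np.array([[2.0, 1.0],
              [1.0, 3.0]])
inner = lambda u, v: u @ G @ v
print(inner(np.array([1.0, 0.0]), np.array([0.0, 1.0])))  # 1.0, i.e. b

# The dual basis to {b1, b2} satisfies f_i(b_j) = delta_ij by
# definition. With b1, b2 as the columns of B, the dual basis
# functionals are the rows of inv(B).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])   # b1 = (1, 0), b2 = (1, 1)
F = np.linalg.inv(B)         # row i is the functional f_i
print(F @ B)                 # the identity matrix: f_i(b_j) = delta_ij
```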


dx said:
In a sense, for finite dimensional vector spaces, the space and its dual are exactly the same as vector spaces. The dual of a 2 dimensional real vector space is also a 2 dimensional real vector space. But for infinite dimensional spaces, the dual can be different from the original space as a vector space.

At the moment, I am not ready to comfortably deal with the infinite dimensional case. Things get weird with infinities and axiom of choice arguments and I don't really get it at this point. But I like what you said about the finite dimensional case. I don't quite see it yet, but I would like to be able to visualize that an element of the original space has a counterpart in the dual such that they are two different points of view on the same element of one space.

From what I have seen, in the lingo of contravariant and covariant vectors, there is such an equivalence. If I am given a contravariant vector v, it is considered a tensor if it is coordinate independent, such that v = v# for v in one coordinate system and the same v (but labeled v#) in another coordinate system. Contravariant representations and covariant representations are nothing more than different types of coordinate systems (as described in my previous post), so let v# be a covariant representation of v. But covariant vectors are associated with the dual space and contravariant vectors are associated with the original space. So if the two spaces are just different coordinate systems on the same space, they are nothing but different views of the same underlying space.
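A minimal sketch of this two-views-of-one-vector picture (R² with a made-up metric G): covariant components come from contravariant ones via v_i = G_ij v^j, and with an orthonormal basis (G the identity) the two coincide, which is why the distinction disappears in Cartesian coordinates.

```python
import numpy as np

G = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # a metric (inner product) on R^2

v_contra = np.array([1.0, 2.0])  # contravariant components v^j
v_co = G @ v_contra              # covariant components v_i = G_ij v^j
print(v_co)                      # [4. 7.]

# With an orthonormal basis, G is the identity and the two
# representations coincide:
print(np.eye(2) @ v_contra)      # [1. 2.]
```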

dx said:
What do you mean by 'reciprocal'? I have not come across that term in this context before.

The coordinate systems for v and v# are said to be reciprocal because b1*b1# = 1, b2*b2# = 1, b1*b2# = m, and b1#*b2 = n for arbitrary scalars m, n. See here also that the inner product is playing the central role in the definition of reciprocal. See also that the inner product between contravariant and covariant vectors is the same as applying functionals from the dual space, since the covariant vectors reside in the dual space. So we would have b1#(b1) = 1, b2#(b2) = 1, b2#(b1) = m, and b1#(b2) = n. And again, I think there might be the requirement that the cross terms have the same value (corresponding to symmetry in the inner product).
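A sketch of reciprocal bases in R² with the ordinary dot product. One caveat: in the standard convention the cross terms are not arbitrary; the defining relation is b_i · b#_j = δ_ij, so the m and n above are forced to zero.

```python
import numpy as np

b1 = np.array([1.0, 0.0])
b2 = np.array([1.0, 1.0])
B = np.column_stack([b1, b2])   # basis vectors as columns

# The reciprocal basis vectors are the rows of inv(B), which enforces
# b#_i . b_j = delta_ij.
B_recip = np.linalg.inv(B)
print(B_recip @ B)              # the identity matrix
print(B_recip[0] @ b2)          # 0.0: the cross term vanishes
```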

So I have learned from you that every vector space has a dual that can be defined, and every vector space has an inner product that can be defined. I recall the vector space of nonlinear operators you suggested. It can also have a dual and/or an inner product defined and behave as a linear operator within that context, and yet it will still have nonlinear action on some other space(s). Reflection tells me that any member of an inner product space is in fact a tensor with respect to its fellow members and the members of its dual. It also tells me that being a linear operator is equivalent to being a member of a vector space in which an inner product and/or dual may be defined, and that this is a good definition, except with the awareness that there are examples of tensors that have special cases of interactions where the members do not behave as tensors. Are there any flaws in this understanding? Now I would like to formalize the algebraic structure (in terms of sets of tensor spaces) that reflects this new understanding. Any suggestions?
 
  • #10
Rising Eagle said:
Ok, so the dual space and inner product can be defined differently, but they seem to be principally the same thing. Your example of Σₙ vₙwₙ is one way to define an inner product. But as you said, one can potentially define many possible inner products on it. Consider a 2-D real vector space V with basis vectors b1 and b2. One can define the inner product by specifying b1*b1 = a, b1*b2 = b2*b1 = b, and b2*b2 = c for any arbitrary scalars a, b, c decided upon. Each combination of scalar values is a different inner product.

Now take the dual space W with basis functionals f1 and f2, whose elements are linear functionals over V where w(v) = d for some scalar d. The exact relationship of the dual is fully defined by f1(b1) = e, f1(b2) = f, f2(b1) = g, f2(b2) = h for any arbitrary scalars e, f, g, h. I'm not sure if there is a requirement here that f1(b2) = f2(b1), so I'll leave it as an open question for the moment. If it is required, then the forms of calculation for functionals over V and for the inner product of two elements of V are provably identical. I don't know it off the top of my head, but I think I may have seen such a proof before somewhere. Furthermore, this tells me a dual space is not unique, unless I have an incorrect definition of dual space (please correct me if I am using a wrong definition or doing something wrong).

Similar to the vector space V, I suppose that the space W also has an inner product: f1*f1 = i, f1*f2 = f2*f1 = j, and f2*f2 = k for any arbitrary scalars i, j, k decided upon. Each combination of scalar values is a different inner product.

But I still keep looking at it and seeing the same form of calculation being defined three different times. It seems redundant. I keep wondering if any analysis needs any more than one of the three to be defined. If not and any analysis can be carried out with only one defined, I make the claim that dual and inner product are equivalent and defining one determines the others.

I don't quite understand what you are claiming. The dual space is the vector space of linear functionals on V, and an inner product is a bilinear operator, so they are not even the same type of object. You say defining either of them determines the other. What do you mean by this? Are you saying there is a 1-1 correspondence between the dual space and the possible inner products? Given a linear functional ω, you can define a symmetric bilinear form from ω as ω(u)ω(v) (for dim V > 1 this is only positive semidefinite, hence degenerate, so it is not quite an inner product), and this class of forms can be put into 1-1 correspondence with the dual space; but it does not exhaust all possible inner products, since an inner product need not be of the form ω(u)ω(v) with ω linear.
Rising Eagle said:
I don't quite see it yet, but I would like to be able to visualize that an element of the original space has a counterpart in the dual such that they are two different points of view on the same element of one space.

Since the two spaces have the same dimension (at least for finite dimensional spaces), you can always find a way to put the two sets in a 1-1 correspondence, but there is no unique or natural way to do this. There are many ways to construct such a correspondence.
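A minimal sketch of this last point (V = R², with two made-up invertible matrices serving as identification maps from V to V*): each choice gives a perfectly good 1-1 correspondence, but the two disagree, so neither is canonical.

```python
import numpy as np

v = np.array([1.0, 2.0])

# Two different invertible maps V -> V*, in components:
G1 = np.eye(2)
G2 = np.array([[2.0, 1.0],
               [1.0, 3.0]])

# Both are 1-1 correspondences between V and V*, yet they send the
# same vector to different covectors; the identification is a choice.
print(G1 @ v)  # [1. 2.]
print(G2 @ v)  # [4. 7.]
```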
 

