A suggested operational definition of tensors

In summary, the conversation discusses different definitions of tensors, with the focus on finding an operational definition that explains what tensors are and how they work. The proposed definition states that scalars are rank 0 tensors, while vectors and covectors are rank 1 tensors. A tensor of higher rank defines a linear map between tensors of lower rank and can be expressed as a linear combination of dyadic basis tensors, where the dot between basis vectors represents the tensor product. The definition allows a tensor of any rank to be constructed. The conversation also touches on the difficulty of understanding the transformation properties of tensors, and on the confusion caused by literature that assumes everyone knows how to multiply multi-dimensional arrays.
  • #1
Will Flannery
The two tensor definitions I'm (newly) familiar with, by transformation rules, and as a map from a tensor product space to the reals, don't tell me what a tensor does, and to the best of my knowledge they don't make it apparent. So, I'm looking for an operational definition, and suggesting the following one -

Scalars are rank 0 tensors.

For tensors of higher rank we start with an n-dimensional vector space V with basis e1, e2, ... en and its dual space V* of covectors with basis e1*, e2*, ... en*. Vectors and covectors are rank 1 tensors.

A tensor of rank r > 1 is a means of defining a linear map from tensors of one rank to tensors of another. The input and output ranks are usually less than r, but not necessarily; the definition doesn't require it.

A tensor of rank r > 1 is a linear combination of dyadic basis tensors of rank r, where a dyadic basis tensor has the form x1.x2.x3...xr and each xi is either a basis vector ej of V or a basis covector ej* of V*.

The product of two dyadic basis tensors x1.x2.x3...xr and y1.y2.y3...ys is computed by evaluating <xr, y1>; if that is not 0, evaluating <x(r-1), y2>; and so on, until one of the dyadic basis tensors (normally y1.y2...ys) is used up. If no 0 was produced, the product is whatever remains: either 1 or the leftover dyadic basis tensor. Note that when evaluating <x(r-(k-1)), yk>, one of the pair must be a vector and the other a covector, else it's an error.

The x dyadic basis tensor eats the y dyadic basis tensor till one of the nibbles is a 0 and the result is 0, or till one is used up and the result is 1 or the part of the x dyadic basis tensor that didn't get a bite (or the y dyadic basis tensor that didn't get bitten).
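
To make the rule concrete, here is a small worked example (my addition, not part of the original post), assuming the pairing <ei*, ej> = <ej, ei*> = δij:

```latex
% Worked examples of the dyadic product rule, assuming \langle e_i^*, e_j\rangle = \delta_{ij}
(e_1 \otimes e_2^*)\,(e_2):\quad \langle e_2^*, e_2\rangle = 1,\ \text{so the result is } e_1 \\
(e_1 \otimes e_2^*)\,(e_3):\quad \langle e_2^*, e_3\rangle = 0,\ \text{so the result is } 0 \\
(e_1^* \otimes e_2^*)\,(e_2 \otimes e_1):\quad \langle e_2^*, e_2\rangle = 1,\ \langle e_1^*, e_1\rangle = 1,\ \text{so the result is the scalar } 1
```

The first line is just a mixed rank 2 tensor acting on a vector, i.e. the familiar matrix-times-vector action.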

A tensor A maps a tensor B by applying each of the dyadic basis tensors in A to each of the dyadic basis tensors in B, multiplying their coefficients whenever their product is not 0, and summing the resulting dyadic basis tensors to get the result of A applied to B.
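
Here is a minimal Python sketch of that procedure (my addition; the labelling of basis vectors as ('v', i), covectors as ('c', i), and the dict-of-coefficients storage are illustrative choices, not anything prescribed in the post):

```python
# Minimal sketch of the dyadic "eating" rule described above.
# A dyadic basis tensor is a tuple of labels, ('v', i) for a basis vector e_i
# and ('c', i) for a basis covector e_i*; a tensor is a dict mapping dyadic
# basis tensors to coefficients.  The pairing <e_i*, e_j> = delta_ij is assumed.

def pair(a, b):
    """Pairing of a basis vector with a basis covector: 1 if indices match, else 0."""
    (kind_a, i), (kind_b, j) = a, b
    if kind_a == kind_b:
        raise ValueError("one factor must be a vector and the other a covector")
    return 1 if i == j else 0

def dyad_product(x, y):
    """Product of two dyadic basis tensors x and y.

    The last factor of x is paired with the first factor of y, the next-to-last
    with the second, and so on, until one dyad is used up.  Returns (coeff, rest):
    coeff is 0 or 1, rest is the leftover dyadic basis tensor (the empty tuple
    means a scalar).
    """
    k = min(len(x), len(y))
    for step in range(k):
        if pair(x[len(x) - 1 - step], y[step]) == 0:
            return 0, ()
    rest = x[:len(x) - k] if len(x) >= len(y) else y[k:]
    return 1, rest

def apply_tensor(A, B):
    """Apply tensor A to tensor B: distribute over all pairs of dyads,
    multiply coefficients when the dyad product is nonzero, and sum."""
    result = {}
    for xa, ca in A.items():
        for xb, cb in B.items():
            coeff, rest = dyad_product(xa, xb)
            if coeff:
                result[rest] = result.get(rest, 0) + ca * cb
    return result

# Example in 3-space: a mixed rank 2 tensor  sum_ij T_ij e_i . e_j*  applied to
# a vector v = sum_j v_j e_j gives the coefficients of the usual product T v.
T = {(('v', i), ('c', j)): 10 * i + j for i in range(3) for j in range(3)}
v = {(('v', j),): float(j + 1) for j in range(3)}
print(apply_tensor(T, v))
```

The rank 2 example also illustrates the component count: in 3-space there are nine dyads of the form ei . ej*, hence nine independent coefficients.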

I think this is how tensors work, and I think this definition does explicitly spell it out and make it clear. But I'd like to have it verified or corrected if necessary.
 
  • #2
Questions:
- You seem to have declared one unknown (scalar) in terms of another unknown (rank 0 tensor). What is the definition of at least one of those?
- Same question about rank 1 tensors and vectors. What is the definition of at least one of those?
- Your definition seems to require a particular basis. Do tensors exist independent of basis? Hint: Yes. How can your definition accommodate that?
- What makes a vector a basis vector? What makes a basis vector a vector?
- What is a "dyadic basis tensor"? When you say "where a dyadic basis tensor has the form x1.x2.x3...xr" what does the dot between the basis vectors indicate?
- Can you show that your definition allows you to write the most general tensor in this form? For example, in 3-space, a rank 2 tensor has nine components. Does your basis allow for this?
- Can you recover the transformation properties of a tensor from your definition?
 
  • #3
1. A scalar is a number, I assume everyone knows this.
2. Everyone knows what a vector is. The point of my definition is not mathematical rigor and completeness, but understandability, and the right tradeoff between conciseness and completeness is important.
3. The definition does require a basis; the component transformation rules must then be derived to show that the resulting object is basis independent.
4. Again, everyone knows what a basis vector is.
5. The dot was not defined; I now see that it is the usual tensor product, so no problem there. A dyadic tensor is a tensor product of rank 1 tensors. You can add that the dyadic basis tensors form a basis for the tensor product space.
6. Sure, the definition is constructive, and you can construct a tensor of any rank and index type (1 up, 3 down, 2 up, etc.).
7. Yes, it's straightforward.
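
For reference, the transformation rule that answer 7 refers to, stated for a linear change of basis e'_j = A^i_j e_i (standard material, added here for completeness rather than quoted from the thread):

```latex
v'^{i} = (A^{-1})^{i}{}_{k}\, v^{k}, \qquad
w'_{j} = A^{k}{}_{j}\, w_{k}, \qquad
T'^{i}{}_{j} = (A^{-1})^{i}{}_{k}\, A^{l}{}_{j}\, T^{k}{}_{l}
```

That is, vector components pick up the inverse matrix, covector components pick up the matrix itself, and a mixed rank 2 tensor picks up one factor of each.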

Note: I wrestled with tensors for a long time, mostly just foolin, but when I got serious it still took me two weeks to figure them out, because the 2 popular definitions don't give you any idea of what they are (a way to define maps from tensor to tensor) or how they are evaluated (the <> operation is extended to dyadic basis tensors), even though they are perfectly fine definitions. So, I think my definition has merit.

Incidentally, the big problem was understanding that vector components transform contravariantly while covector components transform covariantly, because when you're only dealing with orthonormal coordinates they both transform the same way, and that's never pointed out, and no examples are ever given. So I'd do some examples and they always came out the same, and that was driving me crazy.
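
A small numerical illustration of that point (my addition, using NumPy): with the columns of A holding the new basis vectors in the old basis, vector components change by the inverse of A and covector components by its transpose, and the two rules coincide exactly when A is orthogonal, which is why examples done in orthonormal coordinates all come out the same.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])    # vector components in the old basis
w = np.array([0.5, -1.0, 2.0])   # covector components in the old basis

def change_basis(A, v, w):
    """Columns of A are the new basis vectors written in the old basis."""
    v_new = np.linalg.inv(A) @ v   # contravariant rule for vector components
    w_new = A.T @ w                # covariant rule for covector components
    return v_new, w_new

# Orthonormal case: a rotation, so A^{-1} = A^T and the two rules agree.
t = 0.3
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
print(np.allclose(np.linalg.inv(R), R.T))   # True

# Non-orthonormal case: the two rules now give genuinely different matrices.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 3.0]])
print(np.allclose(np.linalg.inv(A), A.T))   # False

# In both cases the pairing <w, v> is unchanged, as it must be.
v_new, w_new = change_basis(A, v, w)
print(np.isclose(w @ v, w_new @ v_new))     # True
```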

Another note: the definition by transformation rule, and other tensor literature, appear to me (I'm not certain here) to assume that everyone knows how to multiply multi-dimensional arrays, and I realized at some point I had no idea how to multiply multi-dimensional arrays and I still don't.
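
For what it's worth, here is a short sketch (my addition) of the usual convention: multiplying multi-dimensional arrays in index notation means taking an outer product and then summing over each repeated (contracted) index, which np.einsum spells out directly.

```python
import numpy as np

T = np.arange(27.0).reshape(3, 3, 3)   # components T[i, j, k] of a rank 3 array
v = np.array([1.0, 2.0, 3.0])          # components v[k] of a vector

# S[i, j] = sum_k T[i, j, k] * v[k]
S = np.einsum('ijk,k->ij', T, v)

# The same contraction written as an explicit loop
S_loop = np.zeros((3, 3))
for i in range(3):
    for j in range(3):
        for k in range(3):
            S_loop[i, j] += T[i, j, k] * v[k]

print(np.allclose(S, S_loop))   # True
```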
 

1. What are tensors and why are they important in science?

Tensors are mathematical objects that generalize scalars and vectors: they describe quantities that can depend linearly on several directions at once. They are important in science because they provide a way to describe and analyze complex systems, such as fluid dynamics, electromagnetism, and general relativity.

2. How is an operational definition of tensors different from other definitions?

An operational definition of tensors specifies how tensors are built and how they act, that is, how they are actually evaluated in a given system. This is in contrast to other definitions, which simply describe the properties and transformation behavior of tensors.

3. Can tensors be used in fields other than physics?

Yes, tensors have applications in various fields such as engineering, computer science, and economics. They are particularly useful in data analysis and machine learning, where they can be used to represent and manipulate large datasets.

4. Are there different types of tensors?

Yes, there are different types of tensors, including rank 0 tensors (scalars, a single component), rank 1 tensors (vectors and covectors, one component per dimension), and higher-rank tensors (which have multiple indices and are used to represent more complex physical quantities).

5. How is the operational definition of tensors useful in practical applications?

The operational definition of tensors gives a concrete, step-by-step way to work with tensors in real-world situations. This can be helpful in experimental settings, where precise calculations are needed, as well as in simulations and modeling, where tensors can be used to accurately represent and analyze complex systems.
