# Where can I find tensor analysis explained physically in the simplest words?

G.S.RAMYA
Tensor analysis is the backbone of the general theory of relativity... But it is a difficult mathematical concept. How can I learn more about tensor analysis, not just in a purely mathematical way?

Hi,

Unfortunately, I don't think there is a way to fully understand the concept without mathematics.

If you don't already know what a tensor is, it's a geometrical concept that explains the relationship between vectors. For example, a straight line gives two pieces of information: magnitude and direction. So it is a second order tensor. In relativity Einstein used 4-tensors for the three dimensions of space and one for time.

For example, to specify a location in spacetime, you would need to specify four things: the three spatial coordinates, and the time at which the event occurs.

Though, if you're willing to take on the mathematics, here is an excellent explanation of tensors for physics put out by NASA:

http://www.grc.nasa.gov/WWW/k-12/Numbers/Math/documents/Tensors_TM2002211716.pdf

Tensors are mappings from vectors to real numbers. Take the metric tensor as an example. It takes two vectors as its arguments and returns a value which is the inner product between the two vectors. Mathematically you would write
$V \cdot U = g(V,U)$
where V and U are vectors and g is the metric tensor.

The metric tensor is called a rank two tensor, because it maps two vectors to a number. Likewise, a rank N tensor maps N vectors to a number. Vectors are rank 1 tensors and numbers are rank 0 tensors.

A rank N tensor has d^N degrees of freedom (numbers, which define the mapping), where d is the number of dimensions. Usually d=4. In this case, a rank 0 tensor has 1 (duh, it's a number, so you clearly need one number), a rank 1 tensor has 4 (one number for each dimension, like giving the coordinates), a rank 2 tensor has 16, and so on.
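To make the counting concrete, here is a minimal Python sketch of the metric tensor described above, using the flat-spacetime (Minkowski) metric with signature (-, +, +, +). The function name and the example vectors are just for illustration.

```python
# A sketch of "a rank 2 tensor maps two vectors to a number", using the
# Minkowski metric with signature (-, +, +, +) as the example.

def g(V, U):
    """Inner product of two 4-vectors under the Minkowski metric."""
    eta = [[-1, 0, 0, 0],
           [0, 1, 0, 0],
           [0, 0, 1, 0],
           [0, 0, 0, 1]]
    # rank 2, dimension 4: the mapping is defined by 4**2 = 16 numbers
    return sum(eta[mu][nu] * V[mu] * U[nu]
               for mu in range(4) for nu in range(4))

V = [1, 0, 0, 0]   # a purely timelike 4-vector
U = [2, 1, 0, 0]
print(g(V, U))     # -2
```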

The reason why tensors are convenient to use in relativity is that you always know how they transform from one coordinate system to another.

Mark M said:
If you don't already know what a tensor is, it's a geometrical concept that explains the relationship between vectors. For example, a straight line gives two pieces of information: magnitude and direction. So it is a second order tensor. In relativity Einstein used 4-tensors for the three dimensions of space and one for time.
This is not correct. You are confusing the dimensionality of the mathematical object with its rank: in relativity, we use 4-vectors, not 4-tensors. The 4-vector is simply a 4-dimensional vector specifying, as you say, the 3 dimensions of space and 1 of time. All vectors, regardless of their dimension, are 1st-rank (or 1st-order, not 2nd-order) tensors. The rank of the tensor is given by the number of indices: for example, a matrix is a 2nd rank tensor because its elements can be represented as $a_{ij}$, where i labels the row and j labels the column. The rank should not be confused with the dimension of the vector space on which the tensor acts.

To answer the OP, tensor analysis is indeed dry mathematics. The famous tome "Gravitation" by Misner, Thorne, and Wheeler expends quite a lot of effort describing tensors as intuitive "machines" that act on vectors. It's an at times exceedingly verbose text, but it might satisfy your needs.

Any reasonable intermediate or advanced mechanics text, such as Symon or Goldstein, will cover tensors from a physical point of view. Or you could try texts on mathematical methods for physicists, such as Arfken.

G.S.RAMYA said:
Tensor analysis is the backbone of the general theory of relativity... But it is a difficult mathematical concept. How can I learn more about tensor analysis, not just in a purely mathematical way?

Imagine a sailboat. The wind pushes against the sail, and that causes the sailboat to have a force in a certain direction. The important thing is that this force doesn't *have* to be in the same direction as the wind.

You can write a set of equations that describe how the force on the boat varies with the wind, and that looks like

Force = (magic box) * (wind direction)

The reason that tensors are important in GR, is that you can use these magic boxes for things like calculating distances.

That magic box is a matrix of numbers, and that matrix is a tensor.
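For concreteness, here is a tiny numerical sketch of the magic box in a 2-dimensional toy world; the matrix entries are made up purely for illustration.

```python
# A toy version of the "magic box": a 2x2 matrix that turns a wind
# vector into a force vector. The numbers are invented for illustration.

def apply_box(box, wind):
    """Multiply a 2x2 matrix by a 2-component vector."""
    return [box[0][0] * wind[0] + box[0][1] * wind[1],
            box[1][0] * wind[0] + box[1][1] * wind[1]]

box = [[0.8, 0.3],
       [0.1, 0.5]]    # off-diagonal terms mean force need not align with wind

wind = [1.0, 0.0]     # wind blowing along x
force = apply_box(box, wind)
print(force)          # [0.8, 0.1] -- not parallel to the wind
```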

In linear algebra, you studied linear operators on vector spaces. These were functions that mapped vectors to vectors in a linear way, specified by the fact that if u and v were any two vectors in V and r was any scalar, then T is linear if T(ru + v) = rT(u) + T(v).
A physical example of a linear operator is rotation. If we rotate the resultant of two vectors (by drawing a parallelogram), we expect the result to be the same as if we rotated each individual vector by the same amount and then found the resultant.
A more general idea is that of a linear map from a vector space to the set of real numbers. Some real numbers we associate with geometric objects that are useful are volumes and areas. For the area of a parallelogram spanned by two vectors, it can be shown that if A(u, v) is the function giving us the area, it must be linear in u and linear in v separately. That is, A(ru + v, w) = rA(u, w) + A(v, w) and A(u, rv + w) = rA(u, v) + A(u, w). A is then called a multilinear map, which is shortened to tensor.
It should be apparent from linear algebra that since A is linear in each vector argument, once we choose a coordinate basis, we can write components for A, and these components will differ in different coordinate systems.
In particular, if we choose (1, 0) = e1 and (0, 1) = e2 as a basis with Cartesian coordinates, then A((x1, y1), (x2, y2)) = A(x1e1 + y1e2, x2e1 + y2e2) = x1A(e1, x2e1 + y2e2) + y1A(e2, x2e1 + y2e2) = x1x2A(e1, e1) + x1y2A(e1, e2) + y1x2A(e2, e1) + y1y2A(e2, e2).
If we reasonably define the 2-dimensional area between a vector and itself to be 0, then we get A(e1, e2) + A(e2, e1) = A(e1 + e2, e1 + e2) = 0, which means A(e1, e2) = -A(e2, e1).
By choosing to represent area with a tensor, we are forced into contemplating negative areas, which correspond to changing the order of the vectors. This is useful in determining the orientation of the vectors with respect to a chosen coordinate system.
In our chosen system, if we choose units such that A(e1, e2) = 1, we have all 4 components for A: A(e1, e1) = 0, A(e1, e2) = 1, A(e2, e1) = -1, and A(e2, e2) = 0. This can be listed more concisely as A11 = 0, A12 = 1, A21 = -1, and A22 = 0 if we wish to put A in matrix form.
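Here is a small Python sketch of the area tensor with the components just derived (A11 = 0, A12 = 1, A21 = -1, A22 = 0); contracting it with two vectors recovers the familiar determinant formula for the signed area.

```python
# The 2-dimensional area tensor worked out above, in component form.
# Feeding it two vectors reproduces the determinant x1*y2 - y1*x2.

A = [[0, 1],
     [-1, 0]]

def area(u, v):
    """Signed area of the parallelogram spanned by u and v."""
    return sum(A[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

u, v = (3, 0), (1, 2)
print(area(u, v))    # 6  (= x1*y2 - y1*x2)
print(area(v, u))    # -6 (swapping the vectors flips the orientation)
print(area(u, u))    # 0  (a vector spans no area with itself)
```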
With a volume tensor in 3 dimensions, we use 3 vectors, and thus have 3 indexes instead of 2, one index per vector. This, and other tensors that take more than two vectors, cannot be put into a flat 2-dimensional matrix form for this reason, and are best dealt with by their index form, which represents dealing with their components when put into a specific coordinate system.
When we move the parallelogram around the plane, attaching the vector tails to points other than the origin, the area of the parallelogram does not change. For more general spaces, it is not hard to see that on a curved surface, a parallelogram attached to each point may actually have a different area, and thus an area tensor field should assign a different area tensor to each point.
To specify A as an object (ie., area of parallelogram spanned by two vectors) independent of its target's location in the plane, it is thus best to use a tensor field: a function which assigns a tensor A to each point in the plane. For Cartesian coordinates, the tensor field A(p) is a constant: it returns the same tensor A with the same components at each point p. Physicists tend to blur the distinction between tensor fields and tensors, which can confuse the layperson consulting non-physics oriented mathematical texts.
Talking about parallelograms changing area when attached to different points requires us to have a concept of "the same vectors attached to different points", which is where we get the concept of tangent space to a point (which is what tensors act upon) and the machinery of parallel transport to say how the vectors in the tangent spaces at different points are connected. Flat Euclidean space has a flat connection: the components of a vector attached to (in the tangent space of) a point remains the same if we parallel transport it to (the tangent space of) a different point.
There is a bit more going on. For example, raising and lowering indexes has to do with the dual tangent space, or covectors, and so on. There are also many tensors where you do not want to get a real number by giving them the full complement of vectors, but instead use them to transform vectors from one space to another; in general, a tensor need not act on two vectors in the same vector space, and may act on many vectors in many different vector spaces. The important thing to remember is that a tensor is just a multilinear map, so once you know linear algebra, you already have a good understanding of tensor algebra.
One useful tensor tells us how a vector changes when it is carried around a small parallelogram in a curved space by parallel transport and returned to its original tangent space. For example, if you move a tangent vector around a small parallelogram on the surface of a sphere, maintaining its angle with each curve representing each side of the parallelogram, the vector no longer points in the same direction when it returns to its starting point. Such a tensor takes 3 vectors, the original vector and the two vectors defining the parallelogram, and gives us the resulting vector, which may differ from the original if the space is not flat. This particular tensor is the Riemann curvature tensor, which you will be using quite a bit.
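A toy numerical sketch of the sphere example (not a general transport routine): in the orthonormal frame carried along a circle of constant colatitude theta, parallel transport reduces to rotating the vector's components at the constant rate cos(theta) per unit of longitude, so one full loop rotates the vector by 2*pi*cos(theta); the mismatch with the identity is the holonomy that curvature produces.

```python
import math

# Parallel transport of a tangent vector once around a circle of
# constant colatitude theta on the unit sphere, in the orthonormal
# frame (e_theta, e_phi) carried along the circle. Each small step
# rotates the components by -cos(theta) * dphi.

def transport_around_latitude(v, theta, steps=1000):
    """Transport components v = (a, b) once around the latitude circle."""
    a, b = v
    dphi = 2 * math.pi / steps
    c = math.cos(math.cos(theta) * dphi)
    s = math.sin(math.cos(theta) * dphi)
    for _ in range(steps):
        a, b = c * a + s * b, -s * a + c * b
    return (a, b)

theta = math.pi / 3          # colatitude 60 degrees, cos(theta) = 1/2
v1 = transport_around_latitude((1.0, 0.0), theta)

# Total rotation after one loop is 2*pi*cos(theta) = pi here, so the
# transported vector comes back pointing the opposite way:
print(v1)   # approximately (-1.0, 0.0)
```

On a flat plane the analogous loop would bring the vector back unchanged; the leftover rotation is exactly the curvature effect the Riemann tensor encodes.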
Any good text on differential geometry or general relativity should give you a more than adequate introduction to tensor analysis, in far more detail than can reasonably be typed in this post. I would recommend Spivak's texts for a complete view, and I also like "Differential Forms and Connections", though it may be a bit too purely mathematical. I've found Schutz's "A First Course in General Relativity" to have a great section on tensors that motivates the Riemann tensor and connections well.

twofish-quant said:
Imagine a sailboat. The wind pushes against the sail, and that causes the sailboat to have a force in a certain direction. The important thing is that this force doesn't *have* to be in the same direction as the wind.

You can write a set of equations that describe how the force on the boat varies with the wind, and that looks like

Force = (magic box) * (wind direction)

The reason that tensors are important in GR, is that you can use these magic boxes for things like calculating distances.

That magic box is a matrix of numbers, and that matrix is a tensor.

That magic box does not need to be a tensor in this example, though. Only if wind direction and force are genuine vectors (meaning they transform correctly under rotations, etc.) is the magic box a proper tensor.
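The transformation behavior mentioned here can be checked numerically: if wind and force really are vectors, then under a rotation R of the coordinates the box B must transform as B' = R B R^T, the rank-2 tensor transformation law. A sketch, with a made-up 2x2 box for illustration:

```python
import math

# Check the rank-2 transformation law B' = R * B * R^T numerically:
# applying the transformed box to the rotated wind, then rotating the
# result back, should recover the original force.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(X, v):
    return [X[0][0] * v[0] + X[0][1] * v[1],
            X[1][0] * v[0] + X[1][1] * v[1]]

def rot(t):
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

B = [[0.8, 0.3], [0.1, 0.5]]          # made-up "magic box"
R = rot(math.pi / 4)
Rt = [[R[j][i] for j in range(2)] for i in range(2)]   # transpose

B_rot = matmul(matmul(R, B), Rt)      # the tensor transformation law
wind = [1.0, 2.0]
force = matvec(B, wind)               # force in the original frame
force_rot = matvec(B_rot, matvec(R, wind))
force_back = matvec(Rt, force_rot)    # rotate the result back

print(all(abs(a - b) < 1e-9 for a, b in zip(force_back, force)))  # True
```

If the box failed to transform this way, the two frames would disagree about the physical force, which is exactly why a "proper tensor" must obey the transformation law.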

## 1. What is tensor analysis?

Tensor analysis is a mathematical tool used to describe and analyze physical phenomena that involve multiple dimensions and directions. It helps us understand the behavior of objects and systems in a more complex and comprehensive way.

## 2. Why is tensor analysis important in physics?

Tensor analysis is important in physics because it allows us to describe and predict the behavior of physical systems with multiple dimensions and directions, such as fluid flow, electromagnetic fields, and general relativity. It also helps us understand the connection between different physical quantities and their transformations under different coordinate systems.

## 3. Can tensor analysis be explained in simple terms?

Yes, tensor analysis can be explained in simple terms by using analogies and examples from everyday life. For example, a tensor can be thought of as a multidimensional matrix that describes how a physical quantity changes as we move in different directions.

## 4. What are some real-world applications of tensor analysis?

Tensor analysis has many practical applications in fields such as engineering, physics, and computer science. It is used to study fluid dynamics, predict the behavior of materials under stress, and develop algorithms for image and signal processing.

## 5. How can I learn tensor analysis?

To learn tensor analysis, it is important to have a strong foundation in linear algebra and multivariable calculus. There are many online resources, textbooks, and courses available that can help you understand the concepts and applications of tensor analysis. It is also helpful to practice solving problems and applying tensor analysis to real-world scenarios.
