What topics from linear algebra do I need to study tensors?

In summary, a tensor of type (n,m) is a multilinear map from n copies of the dual space and m copies of the vector space to the underlying field, with components that change according to a definite rule, the tensor transformation law, when the basis is changed. The dual space of a vector space is the space of all linear functionals on that vector space. Understanding covariant and contravariant components is important for understanding tensors and their applications in physics and pure mathematics.
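To make the transformation rule concrete, here is a minimal numpy sketch (an illustrative example, not taken from the thread; the matrix M and the change of basis A are made-up numbers) checking the type-(0,2) law T' = AᵀTA in two ways:
[code]
# Minimal sketch (assumed example): the components of a bilinear map
# T(u, v) = u.M.v transform as T' = A^T M A when the new basis vectors
# are the columns of A (expressed in the old basis).
import numpy as np

M = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # components T_ij in the old basis
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])            # columns = new basis vectors

T_law = A.T @ M @ A                   # the tensor transformation law
# Direct evaluation: T'_ij = T(e'_i, e'_j) with e'_i = A[:, i]
T_direct = np.array([[A[:, i] @ M @ A[:, j] for j in range(2)]
                     for i in range(2)])
assert np.allclose(T_law, T_direct)
[/code]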
  • #1
AdrianZ
Hi
What topics in linear algebra do I need to know to start learning tensors?
I know the following topics from linear algebra: (1) systems of linear equations, (2) vector spaces (linear independence, span, basis, important subspaces of a vector space), (3) linear transformations (kernel, image, isomorphic vector spaces), (4) eigenvalues and eigenvectors, (5) polynomials, (6) determinants.
What other tools from linear algebra do I need to know to understand tensors? Where should I begin in order to understand tensors little by little? My final goal is to understand the applications of tensors in physics and then in pure mathematics.
 
  • #2
AdrianZ said:
Hi
What topics in linear algebra do I need to know to start learning tensors?
I know the following topics from linear algebra: (1) systems of linear equations, (2) vector spaces (linear independence, span, basis, important subspaces of a vector space), (3) linear transformations (kernel, image, isomorphic vector spaces), (4) eigenvalues and eigenvectors, (5) polynomials, (6) determinants.
What other tools from linear algebra do I need to know to understand tensors? Where should I begin in order to understand tensors little by little? My final goal is to understand the applications of tensors in physics and then in pure mathematics.

you know enough. feel free to ask questions
 
  • #3
AdrianZ said:
Hi
What topics in linear algebra do I need to know to start learning tensors?
I know the following topics from linear algebra: (1) systems of linear equations, (2) vector spaces (linear independence, span, basis, important subspaces of a vector space), (3) linear transformations (kernel, image, isomorphic vector spaces), (4) eigenvalues and eigenvectors, (5) polynomials, (6) determinants.
What other tools from linear algebra do I need to know to understand tensors? Where should I begin in order to understand tensors little by little? My final goal is to understand the applications of tensors in physics and then in pure mathematics.

What kind of objects are you talking about? Are they standard finite-dimensional vector spaces, or something like a Hilbert space?
 
  • #4
You really just need to understand linear functions, matrices, linear independence and bases. Make sure that you understand the relationship between linear functions and matrices described e.g. in post #3 in this thread.
 
  • #5
the basic concept in linear algebra is linear functions. tensors are one step up, and concern multilinear functions. i.e. tensors are a topic in multilinear algebra.

a function is multilinear if it is linear in each variable separately, so indeed linearity is the only prerequisite. the basic example of a multilinear function is multiplication. so tensors are a generalization of multiplication, except the multiplication may not be commutative.
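To see concretely what "linear in each variable separately" means, here is a quick numeric check in Python (a hedged sketch; the function f and the chosen numbers are just illustration):
[code]
# Sketch: f(x, y) = x*y is bilinear, i.e. linear in each argument separately.
def f(x, y):
    return x * y

a, b = 2.0, -3.0
x1, x2, y = 1.5, 4.0, 0.5

# Linearity in the first slot (second slot held fixed):
assert f(a*x1 + b*x2, y) == a*f(x1, y) + b*f(x2, y)
# Note: f is NOT linear as a map on pairs; in general
# f(x1 + x2, y1 + y2) != f(x1, y1) + f(x2, y2). Multilinear, not linear.
[/code]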
 
  • #6
lavinia said:
you know enough. feel free to ask questions
Well, here are the questions that I'm struggling with for now: What is a dual space? (And why do we need it?) What is a covariant/contravariant component of a tensor?

This is how I understand tensors: A tensor is a mapping from the Cartesian product of m vector spaces and n dual vector spaces to a field, such as the real numbers, which is linear in each of its arguments. Such a tensor would be of rank m+n and would have m contravariant/covariant components and n covariant/contravariant components. This is one of the many definitions of tensors that I've seen.

chiro said:
What kind of objects are you talking about? Are they standard finite-dimensional vector spaces, or something like a Hilbert space?
I meant finite-dimensional vector spaces over an arbitrary field.

Fredrik said:
You really just need to understand linear functions, matrices, linear independence and bases. Make sure that you understand the relationship between linear functions and matrices described e.g. in post #3 in this thread.

I fully understand linear independence, spans and bases. I'm also confident doing calculations with matrices.

mathwonk said:
the basic concept in linear algebra is linear functions. tensors are one step up, and concern multilinear functions. i.e. tensors are a topic in multilinear algebra.

a function is multilinear if it is linear in each variable separately, so indeed linearity is the only prerequisite.


the basic example of a multilinear function is multiplication. so tensors are a generalization of multiplication, except the multiplication may not be commutative.

well, I guess a linear function is just another term for a linear transformation from one vector space to another. Is that right?

For now my real confusion arises from the meaning of the dual of a vector space. Then I need to understand what covariant/contravariant vectors are and why we're interested in them.
 
  • #7
AdrianZ said:
I fully understand linear independence, spans and bases. I'm also confident doing calculations with matrices.
Excellent. You should make sure that you also understand the relationship between linear operators and matrices. The fastest way to do that if you don't know it already is to read the post I linked to in my previous post. You need this to understand how the concept of "components of a tensor" generalizes the concept of "components of a linear operator".

AdrianZ said:
well, I guess a linear function is just another term for a linear transformation from one vector space to another. Is that right?
Yes, the terms linear function, linear map, linear operator, and linear transformation all mean the same thing in most books. Some authors only use the term operator when the domain and codomain are the same vector space. A linear functional is a linear map from a vector space over a field F into F.

AdrianZ said:
What is a dual space? (And why do we need it?) What is a covariant/contravariant component of a tensor?
...
For now my real confusion arises from the meaning of the dual of a vector space. Then I need to understand what covariant/contravariant vectors are and why we're interested in them.
This post should be useful. You should also check out the three posts I link to at the end of it, and also this one.
 
  • #8
Instead of posting that last link, I should have just quoted the relevant section.
Fredrik said:
Let V be a finite-dimensional vector space, and V* its dual space. A tensor of type (n,m) is a multilinear map [tex]T:\underbrace{V^*\times\cdots\times V^*}_{\text{$n$ factors}}\times\underbrace{V\times\cdots\times V}_{\text{$m$ factors}}\rightarrow\mathbb R.[/tex] The components of the tensor in a basis [itex]\{e_i\}[/itex] for V are the numbers [tex]T^{i_1\dots i_n}{}_{j_1\dots j_m}=T(e^{i_1},\dots,e^{i_n},e_{j_1},\dots,e_{j_m}),[/tex] where the [itex]e^i[/itex] are members of the dual basis of [itex]\{e_i\}[/itex]. The multilinearity of T ensures that the components will change in a certain way when you change the basis. The rule that describes that change is called "the tensor transformation law".
I just wanted to include a definition of the components of a tensor.
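To illustrate that definition numerically, here is a small numpy sketch (my own example; the bilinear map and the matrix M are assumptions, not from the thread) that recovers the components of a type-(0,2) tensor by feeding it basis vectors:
[code]
# Sketch: for a type (0,2) tensor T on R^3, the components in a basis {e_i}
# are T_ij = T(e_i, e_j).
import numpy as np

M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [3.0, 0.0, 1.0]])

def T(u, v):
    return u @ M @ v                  # a fixed bilinear map (assumed example)

e = np.eye(3)                         # the standard basis
components = np.array([[T(e[i], e[j]) for j in range(3)] for i in range(3)])
assert np.allclose(components, M)     # evaluating on basis vectors recovers M
[/code]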
 
  • #9
Fredrik said:
Excellent. You should make sure that you also understand the relationship between linear operators and matrices. The fastest way to do that if you don't know it already is to read the post I linked to in my previous post. You need this to understand how the concept of "components of a tensor" generalizes the concept of "components of a linear operator".
This is how I understand the relationship between linear transformations and matrices:
Suppose we have two vector spaces V and W and a mapping between them, T: V → W, that preserves linearity, which means that for all u, v[itex]\in[/itex]V and scalars α & β we have: T(αu+βv) = αT(u) + βT(v).
If we apply this linear transformation to a vector in V, written in a basis {ei}, we get:
[itex]T(\alpha_1 e_1+\alpha_2 e_2+\dots+\alpha_n e_n) = \alpha_1 T(e_1) + \alpha_2 T(e_2) +\dots+ \alpha_n T(e_n)[/itex]
Now we can associate a matrix to this linear transformation in the following way:
Let's represent a vector v in V by an n×1 matrix (assuming V is n-dimensional) and a vector w in W by an m×1 matrix (assuming W is m-dimensional). Then we can define the matrix A such that w=Av. The matrix A will be of size m×n. The columns of A will be the T(ei)'s (the {ei} are a basis for V, and the corresponding T(ei)'s will form a basis for W). Hence we can write a vector w in W as a linear combination of the components of a vector v in V using the matrix multiplication Av. Conversely, if you give me an equation like w=Av, I can write it in the form [itex]T(\alpha_1 e_1+\alpha_2 e_2+\dots+\alpha_n e_n) = \alpha_1 T(e_1) + \alpha_2 T(e_2) +\dots+ \alpha_n T(e_n)[/itex].

That's how I understand the relationship between linear transformations and matrices. Correct me if I'm wrong.
Yes, the terms linear function, linear map, linear operator, and linear transformation all mean the same thing in most books. Some authors only use the term operator when the domain and codomain are the same vector space. A linear functional is a linear map from a vector space over a field F into F.
Thanks.

This post should be useful. You should also check out the three posts I link to at the end of it, and also this one.

I checked the thread, but the OP's question is not very understandable to me.

Fredrik said:
Instead of posting that last link, I should have just quoted the relevant section.
I just wanted to include a definition of the components of a tensor.

Doesn't that confirm the way I defined a tensor?
I'm still struggling with the concept of a dual space and dual bases. I don't understand what they are or why they are defined. (In fact, I've seen very different definitions of a dual space, which only makes it more confusing for me.)
 
  • #10
the dual space V* for a vector space V sounds imposing, but it's really a pretty simple idea.

suppose we use the standard basis:

{e1,e2,...,en} for V, where

ej = (0,0,0,...,1,...,0,0) <---1 in the j-th place.

the dual basis is the basis:

12,...,φn} where

φi(ej) = δij, the Kronecker delta.

you already know this basis, the dual basis element:

φi(x1,x2,...,xn) = xi

is just the i-th projection function.

(note: in tensor notation the coordinate indices of the vector x are usually written "up",

x = Ʃ xjej, so I'm breaking with tradition just for this post).

now, how do we normally calculate a linear functional f: V→F?

we use linearity: f(v) = f(a1e1 + a2e2 + ... + anen) = a1f(e1) + a2f(e2) + ... + anf(en).

now the f(ej) are just field elements, which we can regard as the "coordinates" of f in V*, and each aj can be regarded as φj(v).

so f(v) = f(e1)φ1(v) + f(e2)φ2(v) + ... + f(en)φn(v)

if we abbreviate f(ei) by fi, using the linearity of the projection functions we get:

f(v) = (f1φ1+f2φ2+...+fnφn)(v)

(again, i note that it is traditional to write the "coordinates" of a linear functional "down", i am just writing it this way for clarity's sake in this one post)

in other words, we can write f = f1φ1+f2φ2+...+fnφn: a linear functional in V* is just a linear combination of the projection functionals dual to our original basis for V.

that is, given a basis (a special subset of V, sort of the "instant coffee" description of V...just "add coordinates"), the projection functions that pick out the j-th coordinate in that basis, are the corresponding "special subset" of V* (the dual basis).

so, in the real plane, for example, we can describe any linear function from R×R→R as a linear combination of "projection onto the x-axis" and "projection onto the y-axis".

you have seen these dual vectors before in another setting, although you may not have realized it. if we regard "x" as meaning the FUNCTION:

f(x,y) = x, then f' is the linear function with matrix [1 0], that is:

f'(a,b) = 1(a) + 0(b) = a

so df(a,b) (the differential of f at (a,b)) is just φ1(a,b), dual to the tangent vector (1,0)(a,b) (the unit vector in the x-direction with its start point at (a,b)). this is the actual meaning of the symbol "dx" in such expressions as:

[tex]\int P\,dx + Q\,dy[/tex]

(hopefully this explains why "dx" isn't a number...it's a functional, you need to "hit it with a vector" to get a number out of it)

*********

tensors can be understood in terms of the tensor product. a good introduction to that is here:

http://www.dpmms.cam.ac.uk/~wtg10/tensors3.html
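For a concrete feel, here is a short Python sketch of the projection picture above (my own illustration; the vector (1, 2, 4) is an arbitrary choice, and indices are 0-based in the code):
[code]
# Sketch: the dual basis functionals on R^3 are the coordinate projections,
# and phi^i(e_j) equals the Kronecker delta.
import numpy as np

def phi(i):
    """The i-th dual basis functional (0-indexed): picks the i-th coordinate."""
    return lambda v: v[i]

v = np.array([1.0, 2.0, 4.0])                       # an arbitrary vector
assert [phi(i)(v) for i in range(3)] == [1.0, 2.0, 4.0]

e = np.eye(3)                                       # the standard basis
deltas = np.array([[phi(i)(e[j]) for j in range(3)] for i in range(3)])
assert np.allclose(deltas, np.eye(3))               # phi^i(e_j) = delta_ij
[/code]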
 
  • #11
Deveno said:
the dual space V* for a vector space V sounds imposing, but it's really a pretty simple idea.

suppose we use the standard basis:

{e1,e2,...,en} for V, where

ej = (0,0,0,...,1,...,0,0) <---1 in the j-th place.

the dual basis is the basis:

12,...,φn} where

φi(ej) = δij, the Kronecker delta.
Well, I don't know what that notation means: φi(ej) = δij. Does it mean that the inner product of φi and ej should be equal to the Kronecker delta, or that if φi acts on ej it'll produce a scalar? I guess the latter is true. If that's true, then can we define the dual space of a vector space V as the space spanned by the basis {φ1,φ2,...,φn}, such that if any of them acts on a basis vector from V it produces a scalar number? (either 0 or 1)

now, how do we normally calculate a linear functional f: V→F?

we use linearity: f(v) = f(a1e1 + a2e2 + ... + anen) = a1f(e1) + a2f(e2) + ... + anf(en).

now the f(ej) are just field elements, which we can regard as the "coordinates" of f in V*, and each aj can be regarded as φj(v).
The f(ej)'s are just field elements? I thought they were vectors? I start getting confused at this point.
 
  • #12
AdrianZ said:
This is how I understand the relationship between linear transformations and matrices:
Suppose we have two vector spaces V and W and a mapping between them, T: V → W, that preserves linearity, which means that for all u, v[itex]\in[/itex]V and scalars α & β we have: T(αu+βv) = αT(u) + βT(v).
If we apply this linear transformation to a vector in V, written in a basis {ei}, we get:
[itex]T(\alpha_1 e_1+\alpha_2 e_2+\dots+\alpha_n e_n) = \alpha_1 T(e_1) + \alpha_2 T(e_2) +\dots+ \alpha_n T(e_n)[/itex]
Now we can associate a matrix to this linear transformation in the following way:
Let's represent a vector v in V by an n×1 matrix (assuming V is n-dimensional) and a vector w in W by an m×1 matrix (assuming W is m-dimensional). Then we can define the matrix A such that w=Av. The matrix A will be of size m×n. The columns of A will be the T(ei)'s (the {ei} are a basis for V, and the corresponding T(ei)'s will form a basis for W). Hence we can write a vector w in W as a linear combination of the components of a vector v in V using the matrix multiplication Av. Conversely, if you give me an equation like w=Av, I can write it in the form [itex]T(\alpha_1 e_1+\alpha_2 e_2+\dots+\alpha_n e_n) = \alpha_1 T(e_1) + \alpha_2 T(e_2) +\dots+ \alpha_n T(e_n)[/itex].

That's how I understand the relationship between linear transformations and matrices. Correct me if I'm wrong.
This is about half the story, but it also contains a mistake. If dim W>dim V, {T(ei)} doesn't have enough members to be a basis for W. Compare it to what I said:
Fredrik said:
Suppose that [itex]A:U\rightarrow V[/itex] is linear, that [itex]\{u_j\}[/itex] is a basis for U, and [itex]\{v_i\}[/itex] is a basis for V. Consider the equation y=Ax, and expand in basis vectors. [tex]y=y_i v_i[/tex][tex]Ax=A(x_j u_j)=x_j Au_j= x_j (Au_j)_i v_i[/tex] I'm using the Einstein summation convention: Since we're always supposed to do a sum over the indices that appear exactly twice, we can remember that without writing any summation sigmas (and since the operator is linear, it wouldn't matter if we put the summation sigma to the left or right of the operator). Now define [itex]A_{ij}=(Au_j)_i[/itex]. The above implies that [tex]y_i=x_j(Au_j)_i=A_{ij}x_j[/tex] Note that this can be interpreted as a matrix equation in component form. [itex]y_i[/itex] is the ith component of y in the basis [itex]\{v_i\}[/itex]. [itex]x_j[/itex] is the jth component of x in the basis [itex]\{u_j\}[/itex]. [itex]A_{ij}[/itex] is row i, column j, of the matrix of A in the pair of bases [itex]\{u_j\}[/itex], [itex]\{v_i\}[/itex].
This is the whole story. Note that I'm not just saying that there's a matrix equation corresponding to y=Ax. I'm also saying exactly what the components are, given a basis for U and a basis for V. In particular, this means that the components of a linear functional T in a basis {e_i} are just T(e_i). Compare that to the definition of the components of a tensor, and you will see that it's the same idea applied to multilinear functionals.
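Here is a small numpy sketch of that recipe (the particular map T: R² → R³ is an assumed example, not something from the thread): the j-th column of the matrix is the component vector of the image of the j-th basis vector.
[code]
# Sketch of A_ij = (A u_j)_i: build the matrix of a linear map column by column.
import numpy as np

def T(v):
    # an assumed linear map R^2 -> R^3
    return np.array([v[0] + v[1], 2*v[0], 3*v[1]])

u = np.eye(2)                                       # basis for the domain
A = np.column_stack([T(u[j]) for j in range(2)])    # column j = T(u_j)

x = np.array([2.0, -1.0])
assert np.allclose(A @ x, T(x))                     # matrix and map agree
[/code]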

AdrianZ said:
I checked the thread, but the OP's question is not very understandable to me.
If you read the first paragraph of my post, you will see that I didn't understand it either, and wrote a reply that ignores everything he said. You can skip the first two paragraphs of my reply and start reading at "A manifold is a set with some other stuff defined on it, the most important being coordinate systems". The post gives you an overview of the basics of differential geometry, and the posts it links to provide some of the details.

AdrianZ said:
Doesn't that confirm the way I defined a tensor?
Yes. I didn't include that to show you the definition of a tensor, but to show you the definition of the components of a tensor. (You said that tensors have components, but you didn't say what they are). I wanted to include such a definition because I quickly skimmed through the old posts that I suggested that you read, and didn't see one in there.

AdrianZ said:
I'm still struggling with the concept of a dual space and dual bases. I don't understand what they are or why they are defined. (In fact, I've seen very different definitions of a dual space, which only makes it more confusing for me.)
What other definitions have you seen? I don't think I've seen another one. The word "continuous" or "bounded" is sometimes thrown in there, as in "V* is the set of all bounded linear functions from V into ℝ". This addition to the definition is only relevant when V is infinite-dimensional. A linear function is continuous if and only if it's bounded, and as long as we're dealing with finite-dimensional spaces, all linear functions are continuous (if we use the isomorphism between V and ℝn, and the standard norm on ℝn, to define a norm on V).
 
  • #13
AdrianZ said:
Well, I don't know what that notation means: φi(ej) = δij. Does it mean that the inner product of φi and ej should be equal to the Kronecker delta, or that if φi acts on ej it'll produce a scalar? I guess the latter is true. If that's true, then can we define the dual space of a vector space V as the space spanned by the basis {φ1,φ2,...,φn}, such that if any of them acts on a basis vector from V it produces a scalar number? (either 0 or 1)
Yes (to everything except the first guess about inner products).

AdrianZ said:
The f(ej)'s are just field elements? I thought they were vectors? I start getting confused at this point.
He used the word "functional" to indicate that the codomain of f is ℝ (the field that was used in the definition of the vector space). ℝ is a 1-dimensional vector space, so you could call its members "vectors" if you want to, but as you know, that would be unconventional.
 
  • #14
AdrianZ said:
Well, I don't know what that notation means: φi(ej) = δij. Does it mean that the inner product of φi and ej should be equal to the Kronecker delta, or that if φi acts on ej it'll produce a scalar? I guess the latter is true. If that's true, then can we define the dual space of a vector space V as the space spanned by the basis {φ1,φ2,...,φn}, such that if any of them acts on a basis vector from V it produces a scalar number? (either 0 or 1)


The f(ej)'s are just field elements? I thought they were vectors? I start getting confused at this point.

φi(ej) = δij means that:

φi(ej) = 0 if i ≠ j,

φi(ei) = 1.

this is pretty obvious, the i-th coordinate of ej is 0, unless i = j (that's how we define the standard basis).

it's not an "inner product", φi is a function that when given a vector, returns its i-th coordinate (which is a field element). so, in R3 for example:

φ1(1,2,4) = 1
φ2(1,2,4) = 2
φ3(1,2,4) = 4

so, yeah, φi(ej) is a scalar, and φi is defined so that it kills every basis vector except ei, and for ei, it gives the i-th coordinate, which is...1 (it's the only non-zero coordinate ei has).

in R3, the dual basis to {(1,0,0),(0,1,0),(0,0,1)} is the 3 linear functionals:

φ1(x,y,z) = x
φ2(x,y,z) = y
φ3(x,y,z) = z.

any linear functional on R3 can be written as a linear combination of these 3.

for example, suppose f(x,y,z) = 3x + 2(y-z). that's a perfectly good linear functional.

we can write f = 3φ1 + 2φ2 - 2φ3, so in the dual space, with this particular dual basis, we have f = (3,2,-2) (just remember, this isn't really a "vector" but a way to specify WHICH linear functional we mean: take 3 times the first coordinate of a vector, then add 2 times the 2nd coordinate, then subtract 2 times the third coordinate, which is a perfectly good way to get a number from a vector).

and of course the f(ej) are scalars! what does f do? it spits out a number, when you input a vector. well, a basis vector is still a vector, so when we apply a linear functional to a vector, we get a scalar.
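The same example in a few lines of Python (a sketch, storing the functional as its dual-basis "coordinates"; numpy is assumed):
[code]
# Sketch of f(x,y,z) = 3x + 2(y - z): f = 3*phi^1 + 2*phi^2 - 2*phi^3, so its
# dual-basis "coordinates" are (3, 2, -2), and applying f is a dot product.
import numpy as np

f_coords = np.array([3.0, 2.0, -2.0])

def f(v):
    return f_coords @ v               # "hit the functional with a vector"

assert f(np.array([1.0, 1.0, 1.0])) == 3.0   # 3 + 2 - 2
assert f(np.array([0.0, 1.0, 0.0])) == 2.0   # f(e_2) = 2nd coordinate of f
[/code]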
 
  • #15
AdrianZ said:
Well, I don't know what that notation means: φi(ej) = δij. Does it mean that the inner product of φi and ej should be equal to the Kronecker delta, or that if φi acts on ej it'll produce a scalar? I guess the latter is true. If that's true, then can we define the dual space of a vector space V as the space spanned by the basis {φ1,φ2,...,φn}, such that if any of them acts on a basis vector from V it produces a scalar number? (either 0 or 1)
I just realized that what you said suggests that you may have misunderstood the term "scalar". It doesn't mean "either 0 or 1". Also, regarding your suggested definition of the dual space, I don't understand the "such that" part of it. If that's supposed to be a definition of the φi, it's incomplete.
 
  • #16
So, if I've understood it correctly, the dual space is nothing but a concept that arises from the linear 'functionals' on a vector space? (I didn't know what a functional meant before this thread.) Can I say that any vector space has a corresponding dual space consisting of all linear functionals on V? So, the key to understanding dual spaces is to understand what linear functionals are, right?

Fredrik said:
I just realized that what you said suggests that you may have misunderstood the term "scalar". It doesn't mean "either 0 or 1". Also, regarding your suggested definition of the dual space, I don't understand the "such that" part of it. If that's supposed to be a definition of the φi, it's incomplete.
I meant any scalars, not necessarily 0 and 1. My confusion arises from this part:

Deveno said:
φi(ej) = δij means that:

φi(ej) = 0 if i ≠ j,

φi(ei) = 1.

this is pretty obvious, the i-th coordinate of ej is 0, unless i = j (that's how we define the standard basis).

it's not an "inner product", φi is a function that when given a vector, returns its i-th coordinate (which is a field element). so, in R3 for example:

φ1(1,2,4) = 1
φ2(1,2,4) = 2
φ3(1,2,4) = 4
We say that φi(ej)=δij. From that I understand that the only possible values for φ are 0 and 1. So when I see the example, I see something totally different: it says φ3(1,2,4) = 4. The idea is pretty obvious, but I don't understand how φi(ej)=δij explains it.
So, if φi(ej)=δij, then δij is either 0 or 1. How can φ3(1,2,4) be equal to 4?
 
  • #17
the standard basis vectors ej are special, they only have 0 or 1 coordinates.

so the dual basis to them, acting on them, will only return a 0 or 1 value. this doesn't mean that {0,1} is the range of the dual basis; in general a vector can have any field element as a coordinate value.

we just define the dual basis to the standard basis by:

1) defining the values on a basis set
2) extending by linearity.

for example, earlier i wrote φ2(1,2,4) = 2. let's expand this a bit:

φ2(1,2,4) = φ2((1,0,0) + (0,2,0) + (0,0,4))

= φ2(1(1,0,0) + 2(0,1,0) + 4(0,0,1))

= φ2(1(1,0,0)) + φ2(2(0,1,0)) + φ2(4(0,0,1)) (by linearity!)

= 1(φ2(1,0,0)) + 2(φ2(0,1,0)) + 4(φ2(0,0,1)) (by linearity again!)

= (1)(0) + 2(1) + 4(0) (using the dual basis Kronecker delta; it's only 1 when the indices match)

= 0 + 2 + 0 = 2 (using...um, addition of real numbers).
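That expansion can be checked line by line in Python (a minimal sketch under the same conventions; indices are 0-based in the code):
[code]
# Sketch: phi^2(1, 2, 4) = 2 follows from linearity plus phi^i(e_j) = delta_ij.
import numpy as np

e = np.eye(3)                         # e_1, e_2, e_3 as rows (0-indexed)
phi2 = lambda v: v @ e[1]             # phi^2 picks out the 2nd coordinate

v = 1*e[0] + 2*e[1] + 4*e[2]          # (1, 2, 4) expanded in the basis
# by linearity: phi^2(v) = 1*phi^2(e_1) + 2*phi^2(e_2) + 4*phi^2(e_3)
assert 1*phi2(e[0]) + 2*phi2(e[1]) + 4*phi2(e[2]) == phi2(v) == 2.0
[/code]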
 
  • #18
Yaayy! I guess I've understood what a dual basis is. now a dual space is the space spanned by that basis? If yes then I've understood what a dual space is.

Now, what are covariant and contravariant vectors? How do they arise in math/physics?
 
  • #19
In the simplest non-trivial case [itex]V=\mathbb R^2[/itex], [itex]\phi^i(e_j)=\delta^i_j[/itex] are the four equalities [tex]\begin{align}
\phi^1(e_1) &=1\\
\phi^1(e_2) &=0\\
\phi^2(e_1) &=0\\
\phi^2(e_2) &=1
\end{align}[/tex] Since the [itex]\phi^i[/itex] are assumed to be linear, we have [itex]\phi^i(v)=\phi^i(v^je_j)=v^j\phi^i(e_j)=v^j\delta^i_j=v^i[/itex], and in particular [itex]\phi^1(ae_1+be_2)=a\phi^1(e_1)+b\phi^1(e_2)=a[/itex].

AdrianZ said:
Now, what are covariant and contravariant vectors? How do they arise in math/physics?
I explained that in the post I linked to earlier. The one that made you say that you didn't understand the OP's question.

AdrianZ said:
now a dual space is the space spanned by that basis?
Yes, but that would be a weird way to define it. V* is the set of linear functionals from V into ℝ. Addition and scalar multiplication are defined in the obvious ways:

(f+g)(v)=f(v)+g(v)
(af)(v)=a(f(v))

These definitions give V* the structure of a vector space. Let f in V* and v in V be arbitrary. We have [itex]f(v)=f(v^i e_i)=v^if(e_i)=\phi^i(v)f(e_i)[/itex]. Since v was arbitrary, this means that [itex]f=f(e_i)\phi^i[/itex]. Since f was arbitrary, this means that [itex]\{\phi^i\}[/itex] spans V*.
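A short Python sketch of these definitions (my own illustration for V = R²; the functional f is an arbitrary assumed example):
[code]
# Sketch: pointwise operations make V* a vector space, and f = f(e_i) phi^i.
import numpy as np

e = np.eye(2)
phi = [lambda v, i=i: v[i] for i in range(2)]   # dual basis functionals

def add(f, g):  return lambda v: f(v) + g(v)    # (f+g)(v) = f(v) + g(v)
def smul(a, f): return lambda v: a * f(v)       # (af)(v) = a(f(v))

f = lambda v: 5*v[0] - 2*v[1]                   # an arbitrary functional
expansion = add(smul(f(e[0]), phi[0]),          # f(e_1) phi^1 + f(e_2) phi^2
                smul(f(e[1]), phi[1]))

v = np.array([1.0, 3.0])
assert expansion(v) == f(v) == -1.0             # so {phi^i} spans V*
[/code]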
 
  • #20
Great. Now I have almost understood the idea behind defining dual spaces, and also the idea behind covariant and contravariant vectors. Your post about covariant/contravariant vectors was a great one.

Now the only thing that remains unclear is that I want to see some mathematical/physical examples of tensors, and I want to understand how to interpret different physical situations in the new language I've learned. I have almost zero knowledge of differential geometry, so please don't go into very advanced physics topics (like GR).

Is there anything else that I need to know about tensors?
 

1. What is the relationship between linear algebra and tensors?

Linear algebra is a branch of mathematics that deals with vector spaces and linear transformations. Tensors are mathematical objects that generalize vectors and matrices, and they can be represented using linear algebra concepts such as vectors, matrices, and linear transformations.

2. Do I need to have a strong background in linear algebra to study tensors?

Yes, a strong understanding of linear algebra is necessary for studying tensors. Tensors are built upon the concepts of vectors and matrices, so a solid foundation in linear algebra is essential for understanding the properties and operations of tensors.

3. Which specific topics from linear algebra should I focus on for studying tensors?

Some important topics from linear algebra for studying tensors include vector spaces, linear transformations, eigenvalues and eigenvectors, and matrix operations such as multiplication and inversion. It is also helpful to have knowledge of multivariable calculus and coordinate systems.

4. How are tensors used in real-world applications?

Tensors have a wide range of applications in fields such as physics, engineering, and computer science. They are used to represent physical quantities and their transformations in three-dimensional space, and they are also used in machine learning and data analysis for tasks such as image recognition and natural language processing.

5. Are there any resources available for learning about tensors from a linear algebra perspective?

Yes, there are many online resources and textbooks that cover tensors from a linear algebra perspective. Some popular textbooks include "Tensor Calculus" by John Lighton Synge and Alfred Schild, and "Introduction to Tensor Calculus" by Taha Sochi. There are also many online courses and tutorials available for self-study.
