Confusion with Tensors: Understanding Finite-Dimensional Vector Spaces

  • Thread starter: JonnyG
  • Tags: Confusion, Tensors
JonnyG
First let me give the definition of tensor that my book gives:

If ##V## is a finite-dimensional vector space with ##\dim(V) = n##, let ##V^k## denote the k-fold Cartesian product ##V \times \cdots \times V##. We define a k-tensor as a map ##T: V^k \longrightarrow \mathbb{R}## such that ##T## is multilinear, i.e. linear in each variable when all the other variables are held fixed. I know there are more general definitions, but since this is the one my book uses, let's stick with it.

Okay, now here is my problem. From this point on, assume ##\mathbb{R}^n## has the usual basis. If ##L^2(\mathbb{R}^n)## is the set of all 2-tensors on ##\mathbb{R}^n##, then it has dimension ##n^2##. If ##M(n,n)## is the set of all ##n \times n## matrices with real entries, then ##L^2(\mathbb{R}^n) \cong M(n,n)##.
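(To see the correspondence concretely, here is a minimal NumPy sketch; the size ##n = 3## and the random coefficient matrix ``A`` are made up for illustration. It checks that a matrix ##A## defines a 2-tensor ##T(v, w) = v^T A w##, and that ##A## is recovered entrywise by evaluating ##T## on basis vectors.)

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))  # hypothetical matrix of coefficients

def T(v, w):
    # the 2-tensor associated with A: T(v, w) = v^T A w
    return v @ A @ w

# multilinearity check: linear in the first slot with the second fixed
v1, v2, w = rng.standard_normal((3, n))
assert np.isclose(T(2 * v1 + v2, w), 2 * T(v1, w) + T(v2, w))

# A is recovered from T on basis vectors: A[i, j] = T(e_i, e_j),
# so 2-tensors on R^n correspond to n x n matrices
e = np.eye(n)
recovered = np.array([[T(e[i], e[j]) for j in range(n)] for i in range(n)])
assert np.allclose(recovered, A)
```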

However, let ##L(\mathbb{R}^n, \mathbb{R}^n)## be the set of all linear transformations ##f: \mathbb{R}^n \longrightarrow \mathbb{R}^n##. It seems obvious to me that ##L(\mathbb{R}^n, \mathbb{R}^n) \cong M(n,n)##. But then this would imply ##L^2(\mathbb{R}^n) \cong L(\mathbb{R}^n, \mathbb{R}^n)##, which is impossible, since ##\dim(L^2(\mathbb{R}^n)) = n^2## and ##\dim(L(\mathbb{R}^n, \mathbb{R}^n)) = n## (I proved its dimension is ##n## and I am sure the proof is correct).

Where have I gone wrong?
 
The correspondence between ##L(\mathbb R^n,\mathbb R^n)## and ##M(n,n)## is bijective. So these vector spaces must have the same dimension.
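(A quick NumPy sketch of that bijection, with a made-up matrix ``M`` and size ``n`` for illustration: the matrix is recovered from the map because its column ##j## is the image of the basis vector ##e_j##, so distinct matrices give distinct maps and every map comes from a matrix.)

```python
import numpy as np

n = 3
rng = np.random.default_rng(1)
M = rng.standard_normal((n, n))  # a matrix in M(n, n)

def f(v):
    # the linear transformation corresponding to M
    return M @ v

# column j of the matrix is f(e_j), so the map determines the matrix
# and vice versa: the correspondence is a bijection
e = np.eye(n)
recovered = np.column_stack([f(e[:, j]) for j in range(n)])
assert np.allclose(recovered, M)
```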
 
That's what I thought as well. So this must just mean my "proof" that ##\dim(L(\mathbb{R}^n, \mathbb{R}^n)) = n## is incorrect.
 
Yes, I agree.
 
JonnyG said:
If ##L^2(\mathbb{R}^n)## is the set of all 2-tensors on ##\mathbb{R}^n##, then it has dimension ##n^2##. ... Where have I gone wrong?

I think you need to specify the target space of ##L^2 (\mathbb R^n, \mathbb R^n)##, i.e., bilinear maps into what space? If it is the base field ##\mathbb R##, of dimension 1 as a vector space over itself, then the dimension is ##n^2##, as you said. If the target is another vector space ##V##, then ##\dim L^2 (\mathbb R^n, \mathbb R^n; V)## is the product of the individual dimensions. Sorry if this is obvious and I misread what you meant.
 
He never mentioned ##L^2(\mathbb R^n,\mathbb R^n)##. The question was about ##L(\mathbb R^n,\mathbb R^n)## (linear operators on ##\mathbb R^n##) and ##L^2(\mathbb R^n)## (bilinear maps from ##\mathbb R^n\times\mathbb R^n## into ##\mathbb R##).
 
OK. Is ##L^2(\mathbb R^n)## supposed to be maps into ##\mathbb R##, or into any vector space ( as in the case of matrix multiplication)?
 
##L^2(\mathbb{R}^n)## maps into the reals. I see the mistake in my "proof" that led to this confusion. It all makes sense now.
 
Just curious, what was it?
 
WWGD said:
Just curious, what was it?

***Sorry about the crappy LaTeX***

Well, the correct proof is to fix the usual basis for ##\mathbb{R}^n##. Let ##T \in L(\mathbb{R}^n,\mathbb{R}^n)##. Let ##\phi_{j,i}## be the linear map with ##\phi_{j,i}(e_i) = e_j## and ##\phi_{j,i}(e_k) = 0## for ##k \neq i##, where ##1 \le j \le n## and ##1 \le i \le n##. We show that the ##\phi_{j,i}## span ##L(\mathbb{R}^n,\mathbb{R}^n)##. We know that ##T(e_i) = \sum_{j = 1}^n d_{j,i}\, e_j## for some scalars ##d_{j,i} \in \mathbb{R}##.

So ##T(e_1) = d_{1,1} \phi_{1,1}(e_1) + d_{2,1} \phi_{2,1}(e_1) + d_{3,1} \phi_{3,1}(e_1) + \cdots + d_{n,1} \phi_{n,1}(e_1)##.

We do this for each ##e_i##. Clearly, then, ##T## is a linear combination of the ##\phi_{j,i}##. There are ##n^2## of the ##\phi_{j,i}##, and linear independence is obvious.

The mistake I was making before was allowing only one index, so I ended up with only ##n## of the ##\phi_{j,i}##.
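(The corrected proof can be sketched numerically. In matrix terms, ##\phi_{j,i}## is the matrix unit ##E_{j,i} = e_j e_i^T##, and any ##T## with coefficients ##d_{j,i}## is the double sum over both indices, ##n^2## terms rather than ##n##. The size ``n`` and coefficient matrix ``D`` below are made up for illustration.)

```python
import numpy as np

n = 3
rng = np.random.default_rng(2)
D = rng.standard_normal((n, n))  # D[j, i] = d_{j,i}, the coefficients of T

def phi(j, i):
    # phi_{j,i} sends e_i to e_j and all other basis vectors to 0;
    # as a matrix it is the matrix unit E_{j,i} = e_j e_i^T
    E = np.zeros((n, n))
    E[j, i] = 1.0
    return E

# T = sum over BOTH indices of d_{j,i} * phi_{j,i}: n^2 terms, not n
T = sum(D[j, i] * phi(j, i) for j in range(n) for i in range(n))
assert np.allclose(T, D)
```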
 
Thanks; your LaTeX is fine.
 