Can Matrix Dimensions Vary Within the Same Vector Space Transformation?

AI Thread Summary
For a linear operator T on V = M_{n \times n}(\mathbb{R}), the matrix representation A = [T]_{\beta} is an (n² x n²)-matrix, since dim(V) = n². The apparent contradiction in writing T(v) = Av disappears once v is distinguished from its coordinate vector: A multiplies the coordinate vector [v]_{\beta} \in \mathbb{R}^{n^2}, not the (n x n)-matrix v itself, so the correct statement is [T(v)]_{\beta} = [T]_{\beta} [v]_{\beta}. The equality [I_V]_{\beta} = I_N holds with N = n², since both sides are (n² x n²)-matrices. Distinguishing elements of V from their coordinate vectors relative to \beta resolves the dimensionality issue.
AKG
If I have a finite dimensional inner product space V = M_{n \times n}(\mathbb{R}), then one basis of V is the set of n² (n x n)-matrices, \beta = \{E_1, \dots , E_{n^2}\} where E_i has a 1 in the i^{th} position, and zeroes elsewhere (and by i^{th} position, I mean that the first position is the top-left, the second position is just to the right of the top-left, and the last position is the bottom-right). Since these matrices are linearly independent and span V, they certainly form a basis, and since there are n² of them, dim(V) = n². Therefore, if I have some linear operator T on V, then A = [T]_{\beta} is an (n² x n²)-matrix, right? However, if v is some element of V, then T(v) = Av, but Av is not even possible, since it involves multiplying two square matrices of different dimension. Now, if I had made a mistake earlier, then maybe A is supposed to be an (n x n)-matrix. But that doesn't seem right.

My textbook proves:

If V is an N-dimensional vector space with an ordered basis \beta, then [I_V]_{\beta} = I_N, where I_V is the identity operator on V. Now, in our case, N = n², but if I was wrong before, and in the previous example, A should have been an (n x n)-matrix, then the equality above essentially states that an (n x n)-matrix is equal to an (n² x n²)-matrix. Where have I (or my book) made a mistake?
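For concreteness, here is a worked instance with n = 2 (an editorial illustration, not part of the original post). The basis \beta consists of the four matrices

E_1 = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \quad E_2 = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad E_3 = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}, \quad E_4 = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},

and a matrix v = \begin{pmatrix} a & b \\ c & d \end{pmatrix} has coordinate vector [v]_{\beta} = (a, b, c, d)^T \in \mathbb{R}^4, so dim(V) = 4 and [T]_{\beta} is a 4x4 matrix acting on such coordinate vectors.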
 
>> However, if v is some element of V, then T(v) = Av, but Av is not even possible, since
>> it involves multiplying two square matrices of different dimension.

I'm not completely sure I've understood your question, but maybe I'm guessing right about where your problem lies:

Assume a linear operator O on \mathbb{R}^n. It can be written in the form O(a) = Ma for any vector a, where M is an n x n matrix. Certainly M and a don't have the same "shape", and yet you wouldn't feel that the equation cannot work, because you know how to interpret it.
In tensorial notation, written out in components, the equation above reads:
(O(a))^\mu = \sum_{\nu=1}^n M^\mu_{\ \nu}\, a^\nu = b^\mu
The last "=" was included to show that the result b is again a vector in \mathbb{R}^n.


Rewriting your "T(v) = Av" in tensorial terms, it would be:
(T(v))^{\alpha\beta} = \sum_{\mu=1}^n \sum_{\nu=1}^n A^{\alpha\beta}_{\ \ \mu\nu}\, v^{\mu\nu} = b^{\alpha\beta}
So the equation is well-defined, and the result is again an element of V.
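To see this contraction concretely, here is a minimal sketch in Python/NumPy (an editorial addition; the arrays A and v are random placeholders chosen only to illustrate the index summation):

import numpy as np

n = 2
rng = np.random.default_rng(0)

# A plays the role of the 4-index object A^{alpha beta}_{mu nu},
# v plays the role of the 2-index object v^{mu nu}.
A = rng.standard_normal((n, n, n, n))
v = rng.standard_normal((n, n))

# b^{alpha beta} = sum over mu and nu of A^{alpha beta}_{mu nu} v^{mu nu}
b = np.einsum('abmn,mn->ab', A, v)

print(b.shape)  # (2, 2): the result is again an n x n matrix, i.e. an element of V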

Sidenotes:
- Av = A*v is never meaningful unless you define what it's supposed to mean; the tensorial notation above does exactly that.
- I didn't understand what your textbook says because I know neither the notation nor what I_N is, so it's quite possible that I completely missed your question.
 


In this case, the mistake lies in reading "T(v) = Av" as a literal matrix product. The matrix A = [T]_{\beta} is indeed an (n² x n²)-matrix, as stated in the original post, but it does not multiply v itself; it multiplies the coordinate vector [v]_{\beta} \in \mathbb{R}^{n^2} of v relative to the basis \beta. The correct statement is [T(v)]_{\beta} = [T]_{\beta} [v]_{\beta}: the operator T acts on elements of V, while its matrix representation acts on their coordinate vectors.

To understand this better, consider the linear transformation T: \mathbb{R}^2 \rightarrow \mathbb{R}^2 given by T(x,y) = (2x,3y). Its matrix with respect to the standard basis is a 2x2 matrix because dim(\mathbb{R}^2) = 2. By the same principle, a linear operator on M_{2 \times 2}(\mathbb{R}) is represented by a 4x4 matrix, because M_{2 \times 2}(\mathbb{R}) is four-dimensional and its elements have coordinate vectors in \mathbb{R}^4. The size of the representing matrix is dictated by the dimension of the space being acted on, not by the shape of its elements.

In the case of a linear operator T on V = M_{n \times n}(\mathbb{R}), the matrix representation A = [T]_{\beta} is an (n² x n²)-matrix because the basis \beta consists of n² matrices, so dim(V) = n² and coordinate vectors relative to \beta live in \mathbb{R}^{n^2}. The equality [I_V]_{\beta} = I_N with N = n² also makes sense, as both sides are (n² x n²)-matrices: the identity operator fixes every coordinate vector in \mathbb{R}^{n^2}.
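As a concrete check of these claims, here is a minimal sketch in Python/NumPy for n = 2, using the transpose map as one illustrative choice of T (the helper functions coords and build_matrix are names introduced only for this example):

import numpy as np

n = 2
N = n * n  # dim(V) = n^2

def coords(v):
    # Coordinate vector [v]_beta in R^{n^2}: the entries of v listed row by row,
    # top-left first and bottom-right last, matching the basis E_1, ..., E_{n^2}.
    return v.reshape(N)

def build_matrix(T):
    # The i-th column of [T]_beta is the coordinate vector of T(E_i).
    A = np.zeros((N, N))
    for i in range(N):
        E_i = np.zeros((n, n))
        E_i.flat[i] = 1.0
        A[:, i] = coords(T(E_i))
    return A

T = lambda v: v.T                  # the transpose operator on M_{2x2}(R)
A = build_matrix(T)                # a 4x4 matrix, since dim(V) = 4
I_V = build_matrix(lambda v: v)    # matrix of the identity operator

v = np.array([[1.0, 2.0], [3.0, 4.0]])
print(A.shape)                                   # (4, 4)
print(np.allclose(A @ coords(v), coords(T(v))))  # True: [T(v)]_beta = A [v]_beta
print(np.allclose(I_V, np.eye(N)))               # True: [I_V]_beta = I_{n^2}

The same construction works for any n and any linear operator T: the columns of [T]_{\beta} are simply the coordinate vectors of T applied to the basis matrices.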

In conclusion, there is no mistake in the original post or in the textbook about the size of A. The confusion comes from not distinguishing an element v of V (an n x n matrix) from its coordinate vector [v]_{\beta} (a column vector of length n²); the matrix A acts on the latter, not on v directly.
 