Find the basis of a vector subspace of R^2,2

SMC

Homework Statement


I know how to find the basis of a subspace of ##\mathbb{R}^2## or ##\mathbb{R}^3##, but I can't figure out how to find the basis of a subspace of something like ##\mathbb{R}^{2,2}##.

I even have an example in my book, which I managed to follow nearly to the end, but not quite...
Given the matrix
$$
A = \begin{bmatrix} 6 & -9 \\ 4 & -4 \end{bmatrix}
$$
show that the subset ##F = \{X \in \mathbb{R}^{2,2} \mid AX = XA\}## is a vector subspace of ##\mathbb{R}^{2,2}## and find its basis, where
$$
X = \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}
$$

Homework Equations

The Attempt at a Solution



I followed the example by multiplying out the matrices in the equation ##AX - XA = 0## and solving the associated system of equations, which after reduction is:

$$
\begin{align*}
4x_1 - 12x_3 - 4x_4 &= 0 \\
9x_1 + 12x_2 - 9x_4 &= 0
\end{align*}
$$

so:

$$
\begin{align*}
x_1 &= 3x_3 + x_4 \\
x_2 &= -\tfrac{9}{4} x_3 \\
x_3 &= x_3 \\
x_4 &= x_4
\end{align*}
$$

Then the example just says: therefore the dimension of the subset ##F## is 2, and its basis consists of the two matrices

$$
\begin{bmatrix} 12 & -9 \\ 4 & 0 \end{bmatrix} \quad\text{and}\quad \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
$$

I don't know where these numbers come from. Please help.

I looked through the book and it only shows how to find a basis for ##\mathbb{R}^n##, not ##\mathbb{R}^{n,n}##.
 
I assume that by ##\mathbb{R}^{n,m}## you mean the vector space of ##(n \times m)##-matrices, which is an unfortunate abbreviation on a physics website. Better would be to call it ##\mathbb{M}_{n,m}(\mathbb{R})##.

As to the vector spaces: why should ##\mathbb{M}_{2,2}(\mathbb{R})## be any different from ##\mathbb{R}^{2\cdot 2} = \mathbb{R}^4##, except for the order in which the components are arranged?

You need to set up a system of linear equations and see how many degrees of freedom you have. The arrangement is only cosmetic (in this case).
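
To make the "degrees of freedom" point concrete, here is a minimal sketch (in Python with NumPy; the tooling is my own choice, not anything from the book) that treats a ##2 \times 2## matrix as a vector in ##\mathbb{R}^4## and counts the free variables of ##AX = XA##:

```python
import numpy as np

A = np.array([[6.0, -9.0],
              [4.0, -4.0]])

# Build the 4x4 matrix of the linear map X -> AX - XA: one column per
# standard basis matrix E_ij of M_{2,2}(R), each flattened into R^4.
columns = []
for i in range(2):
    for j in range(2):
        E = np.zeros((2, 2))
        E[i, j] = 1.0
        columns.append((A @ E - E @ A).flatten())
M = np.column_stack(columns)

rank = np.linalg.matrix_rank(M)
print("rank:", rank, "-> degrees of freedom:", 4 - rank)   # expect 2
```

The nullity of that ##4 \times 4## matrix is exactly the dimension of ##F##, so the arrangement into a matrix really is only cosmetic.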
 
fresh_42 said:
I assume that by ##\mathbb{R}^{n,m}## you mean the vector space of ##(n \times m)##-matrices, which is an unfortunate abbreviation on a physics website. Better would be to call it ##\mathbb{M}_{n,m}(\mathbb{R})##.

As to the vector spaces: why should ##\mathbb{M}_{2,2}(\mathbb{R})## be any different from ##\mathbb{R}^{2\cdot 2} = \mathbb{R}^4##, except for the order in which the components are arranged?

You need to set up a system of linear equations and see how many degrees of freedom you have. The arrangement is only cosmetic (in this case).

In the ##\mathbb{R}^4## case you just take the rows that remain in the matrix after the reduction, and those are the vectors that make up the basis, right?

But in the ##\mathbb{R}^{2,2}## case the basis is made up of matrices instead of regular vectors, so after going through the same procedure, how do you know which matrices make up the basis?
 
SMC said:
In the ##\mathbb{R}^4## case you just take the rows that remain in the matrix after the reduction, and those are the vectors that make up the basis, right?

But in the ##\mathbb{R}^{2,2}## case the basis is made up of matrices instead of regular vectors, so after going through the same procedure, how do you know which matrices make up the basis?
If you want to apply the usual pattern, you may just as well write them as ##(x_{11},x_{12},x_{21},x_{22})##. Only matrix multiplication might get a bit more difficult. But you can always switch from one notation to the other and back.

On the other hand, in order to prove that ##F## is a subspace, you don't need a representation in coordinates. Simply check the properties a subspace has to fulfill. It can be done by using the properties of matrix addition and scalar multiplication without any numbers. The numbers are needed to find the basis. You've already written ##(x_{11},x_{12},x_{21},x_{22}) = (x_1,x_2,x_3,x_4)## and the corresponding equations are
$$
\begin{bmatrix}6 & -9 \\ 4 & -4 \end{bmatrix} \cdot \begin{bmatrix}x_1 & x_2 \\ x_3 & x_4 \end{bmatrix} = \begin{bmatrix}x_1 & x_2 \\ x_3 & x_4 \end{bmatrix} \cdot \begin{bmatrix}6 & -9 \\ 4 & -4 \end{bmatrix}
$$
which gives you ##4## linear equations in ##4## unknowns by comparison of each position in the resulting matrices. This is the scenario you already know. Solve the system and get (I suppose, since I haven't done it) two free variables and two dependent variables.
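
A small sketch of that entrywise comparison (in Python with sympy; my choice of tool, the book presumably does it by hand):

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
A = sp.Matrix([[6, -9], [4, -4]])
X = sp.Matrix([[x1, x2], [x3, x4]])

# Compare AX and XA position by position: four linear equations
# in four unknowns (two of them turn out to be redundant).
for lhs, rhs in zip(A * X, X * A):
    print(sp.Eq(lhs, rhs))
```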
 
fresh_42 said:
If you want to apply the usual pattern, you may just as well write them as ##(x_{11},x_{12},x_{21},x_{22})##. Only matrix multiplication might get a bit more difficult. But you can always switch from one notation to the other and back.

On the other hand, in order to prove that ##F## is a subspace, you don't need a representation in coordinates. Simply check the properties a subspace has to fulfill. It can be done by using the properties of matrix addition and scalar multiplication without any numbers. The numbers are needed to find the basis. You've already written ##(x_{11},x_{12},x_{21},x_{22}) = (x_1,x_2,x_3,x_4)## and the corresponding equations are
$$
\begin{bmatrix}6 & -9 \\ 4 & -4 \end{bmatrix} \cdot \begin{bmatrix}x_1 & x_2 \\ x_3 & x_4 \end{bmatrix} = \begin{bmatrix}x_1 & x_2 \\ x_3 & x_4 \end{bmatrix} \cdot \begin{bmatrix}6 & -9 \\ 4 & -4 \end{bmatrix}
$$
which gives you ##4## linear equations in ##4## unknowns by comparison of each position in the resulting matrices. This is the scenario you already know. Solve the system and get (I suppose, since I haven't done it) two free variables and two dependent variables.

Yes, but I already did that: I solved the system, as you can see above, and found ##x_1## and ##x_2## in terms of ##x_3## and ##x_4##. But then the solution says that the matrices making up the basis are

##\begin{bmatrix}12 & -9 \\ 4 & 0 \end{bmatrix}## and ##\begin{bmatrix}1 & 0 \\ 0 & 1 \end{bmatrix}##

I just don't know how I'm supposed to come up with those matrices.
 
I think that ##\begin{bmatrix}12 & -9 \\ 4 & 0\end{bmatrix} \notin F##, because the entry at position ##(2,1)##, i.e. second row, first column, doesn't work. Anyway, you have basically expressed ##x_1## and ##x_2## in terms of the other two variables ##\{x_3,x_4\}##, which you are free to choose. So any basis of the two-dimensional subspace spanned by all possible choices of ##\{x_3,x_4\}## will do. Usually one chooses the easiest ones: ##\begin{bmatrix}x_3 \\ x_4 \end{bmatrix} \in \left\{ \begin{bmatrix} 1 \\ 0 \end{bmatrix},\begin{bmatrix}0 \\ 1 \end{bmatrix} \right\}##, then computes ##\begin{bmatrix}x_1 , x_2 , 1 , 0 \end{bmatrix}## and ##\begin{bmatrix}x_1 , x_2 , 0, 1 \end{bmatrix}## for the full basis vectors, rearranged again as matrices. But check your answer again: ##A\cdot X = X \cdot A## still has to hold for the solutions you've found.
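
A quick numeric check of that claim (a sketch in Python with NumPy, my own tooling choice):

```python
import numpy as np

A = np.array([[6.0, -9.0], [4.0, -4.0]])
X = np.array([[12.0, -9.0], [4.0, 0.0]])   # the book's first basis matrix

print(A @ X)   # [[ 36. -54.], [ 32. -36.]]
print(X @ A)   # [[ 36. -72.], [ 24. -36.]]
# The (2,1) entries disagree (32 vs 24), so AX != XA and X is not in F.
```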
 
For the given matrix, Mathematica says the solution is
\begin{align*}
x_1 &= \frac 52 x_3 + x_4 \\
x_2 &= -\frac 94 x_3 \\
x_3 &= x_3 \\
x_4 &= x_4
\end{align*}
As fresh_42 said, you can choose any values for ##x_3## and ##x_4##. It looks like they simply chose ##x_3=4## so that ##x_2## would be an integer instead of a fraction.
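
For anyone who wants to reproduce that without Mathematica, here is a sketch in Python with sympy (my own verification, not vela's session) that solves the system and builds the two basis matrices:

```python
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
A = sp.Matrix([[6, -9], [4, -4]])
X = sp.Matrix([[x1, x2], [x3, x4]])

# Solve AX = XA for the dependent variables x1 and x2.
sol = sp.solve([sp.Eq(l, r) for l, r in zip(A * X, X * A)], [x1, x2])
print(sol)   # {x1: 5*x3/2 + x4, x2: -9*x3/4}

# Plug in (x3, x4) = (4, 0) and (0, 1) to get a basis of F.
for s3, s4 in [(4, 0), (0, 1)]:
    B = X.subs(sol).subs({x3: s3, x4: s4})
    print(B.tolist(), "commutes:", A * B == B * A)
```

Note that with this solution, ##x_3 = 4## gives ##\begin{bmatrix}10 & -9 \\ 4 & 0\end{bmatrix}## rather than the book's ##\begin{bmatrix}12 & -9 \\ 4 & 0\end{bmatrix}##, consistent with fresh_42's observation above.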
 
OK, I looked over the posts after having had time to clear my mind, and I'm pretty sure I get it now. Thank you very much for your help!
 
SMC said:
In the ##\mathbb{R}^4## case you just take the rows that remain in the matrix after the reduction, and those are the vectors that make up the basis, right?

But in the ##\mathbb{R}^{2,2}## case the basis is made up of matrices instead of regular vectors, so after going through the same procedure, how do you know which matrices make up the basis?
Just stretch out the four elements of your matrices into 4-dimensional vectors. As long as you are not performing matrix multiplication, it does not matter how you arrange the four elements. (This is actually how the computer algebra package Maple finds the minimal polynomial of a matrix: it computes ##I, A, A^2, A^3, \ldots##, stretches them out into vectors, then finds the first ##A^k## that is a linear combination of the linearly independent ##I, A, A^2, \ldots, A^{k-1}##.)
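
A rough sketch of that flattening trick (in Python with NumPy; this mirrors the idea, not Maple's actual implementation):

```python
import numpy as np

A = np.array([[6.0, -9.0], [4.0, -4.0]])
n = A.shape[0]

flats = [np.eye(n).flatten()]            # vec(I)
P = np.eye(n)
for k in range(1, n * n + 1):
    P = P @ A                            # P = A^k
    stacked = np.vstack(flats + [P.flatten()])
    if np.linalg.matrix_rank(stacked) == len(flats):
        # A^k is a linear combination of I, A, ..., A^(k-1);
        # recover the coefficients by least squares.
        coeffs, *_ = np.linalg.lstsq(np.vstack(flats).T, P.flatten(), rcond=None)
        print("A^%d expressed over [I, A, ...]:" % k, coeffs)
        break
    flats.append(P.flatten())
```

For this ##A## the loop stops at ##k = 2## with ##A^2 = -12I + 2A##, i.e. the minimal polynomial is ##\lambda^2 - 2\lambda + 12##.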
 
It seems worth pointing out that there is a standard mathematical operation here, called the vec operator.

Partition ##\mathbf X## by columns:

##\mathbf X = \bigg[\begin{array}{c|c|c|c|c}
\mathbf x_1 & \mathbf x_2 &\cdots & \mathbf x_{n-1} & \mathbf x_n\end{array}\bigg]##

then apply the vec() operator, which gives you:

##vec\big(\mathbf X\big) = \begin{bmatrix}
\mathbf x_1 \\
\mathbf x_2\\
\vdots \\
\mathbf x_{n-1}\\
\mathbf x_n
\end{bmatrix}##

This can notably be used to solve the Sylvester equation for ##\mathbf X##:

##\mathbf A \mathbf X + \mathbf X \mathbf B = \mathbf Y \leftrightarrow \big(\mathbf A \oplus \mathbf B^T\big)vec\big(\mathbf X\big) = vec\big(\mathbf Y\big)##

where ##\oplus## denotes the Kronecker sum

If we set ##\mathbf B := -\mathbf A## and ##\mathbf Y := \mathbf 0##, then we recover the original question.

- - - -

Note that Kronecker products are everywhere in quantum stuff, and as one might guess, they are closely related to Kronecker sums.

To see an excellent overview of Kronecker products go to:

http://bookstore.siam.org/ot91/

and scroll down and click the blue link that says "Sample Chapter".
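
As a small numeric sketch of the vec() route applied to the original question (Python with NumPy/SciPy, my own illustration; note that column- versus row-major vec conventions decide which Kronecker factor gets transposed):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[6.0, -9.0], [4.0, -4.0]])
n = A.shape[0]
I = np.eye(n)

# Column-major vec: vec(AX) = (I kron A) vec(X) and vec(XA) = (A^T kron I) vec(X),
# so AX = XA becomes a null-space problem for a 4x4 matrix.
K = np.kron(I, A) - np.kron(A.T, I)

for v in null_space(K).T:                 # columns span vec(F)
    X = v.reshape(n, n, order='F')        # undo the column-major vec
    print(np.round(X, 3), "commutes:", np.allclose(A @ X, X @ A))
```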
 