Help: All subspaces of 2x2 diagonal matrices

kostoglotov
The exercise is: (b) describe all the subspaces of D, the space of all 2x2 diagonal matrices.

I just would have said I and Z initially, since you can't do much more to simplify a diagonal matrix.

The answer given is here, relevant answer is (b):

Imgur link: http://i.imgur.com/DKwt8cN.png

I cannot understand how D is R^4, let alone the rest of the answer. I kind of get why there'd be orthogonal subspaces in that case, since it's diagonal...but that's just grasping at straws.

I can see how we might take the columns of D and form linear combinations from them, but those column vectors are in R^2.
 
Maybe they are using the identification of an ##m \times n## matrix ##(a_{ij})## with the ##mn##-tuple (i.e., a point in ##\mathbb R^{mn}##) given by ##(a_{11}, a_{12}, \dots, a_{mn})##, i.e., you order the entries lexicographically by their index pairs to do the identification. ##2 \times 2## diagonal matrices are then identified with the set ##\{(a, 0, 0, b) : a, b \in \mathbb R\}##.
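A quick sketch of that identification in numpy (row-major flattening, purely for illustration):

Python:
import numpy as np

# a general 2x2 matrix flattens to (a11, a12, a21, a22)
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(A.flatten())  # [1. 2. 3. 4.]

# a 2x2 diagonal matrix flattens to (a, 0, 0, b)
D = np.diag([5.0, 7.0])
print(D.flatten())  # [5. 0. 0. 7.]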
 
kostoglotov said:
I cannot understand how D is R^4,

In short, the set of 2x2 matrices with real entries is just a silly way of writing ##\mathbb{R}^4##.

A 2x2 matrix is of course just a collection of 4 independent real numbers, independent in the sense that the entries do not constrain one another. We add component-wise, and we perform scalar multiplication component-wise. Really, this is exactly how we work with row/column vectors; we've just written them down differently. Thinking of them as actual matrices is misleading, I think. The question's solution then follows by describing (very generally) that the subspaces are just (any!) subspaces of dimension 0, 1, 2, and 3. One-dimensional subspaces always have to pass through the zero vector; that's nothing special about this case.
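For instance, continuing the numpy sketch above (again just an illustration of the identification, not part of the book's answer), matrix addition and scalar multiplication agree entry-by-entry with the same operations on the flattened 4-vectors:

Python:
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# adding/scaling matrices is the same as adding/scaling their 4-vectors
assert np.array_equal((A + B).flatten(), A.flatten() + B.flatten())
assert np.array_equal((3.0 * A).flatten(), 3.0 * A.flatten())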
 
FireGarden said:
In short, the set of 2x2 matrices with real entries is just a silly way of writing ##\mathbb{R}^4##.

But the diagonal matrices are already a subspace of ##\mathbb R^4 ## whose 2nd, 3rd entries are both ## 0 ##. That makes it into a 2-dimensional subspace of ##\mathbb R^4 ##.
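Concretely, every element of ##D## is a combination of two fixed diagonal matrices,
##\begin{pmatrix} a & 0 \\ 0 & b \end{pmatrix} = a\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b\begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix},##
so those two matrices form a basis and ##\dim D = 2##.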
 
WWGD said:
But the diagonal matrices are already a subspace of ##\mathbb R^4 ## whose 2nd, 3rd entries are both ## 0 ##. That makes it into a 2-dimensional subspace of ##\mathbb R^4 ##.

Oh, I didn't read the requirement for the matrices to be diagonal. We still get some of the one-dimensional subspaces and the zero subspace anyway: the second and third entries must be zero to be diagonal, but we could just as well fix the first and/or fourth to be zero and still have a diagonal matrix. I'm not sure why the answer claims there are three-dimensional subspaces in this case, though.
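For what it's worth, since ##\dim D = 2##, the complete list of subspaces of ##D## is short: the zero subspace ##\{Z\}##, the one-dimensional subspaces ##\left\{ c \begin{pmatrix} d_1 & 0 \\ 0 & d_2 \end{pmatrix} : c \in \mathbb R \right\}## for a fixed nonzero pair ##(d_1, d_2)## (the multiples of ##I## are one such line), and ##D## itself. Nothing three-dimensional can fit inside a two-dimensional space.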
 