
Multiplicative Identity under Matrix Multiplication

  1. Aug 30, 2010 #1
I've been asked by my professor to identify a group of singular matrices. At first, I did not think this was possible, since a singular matrix is non-invertible by definition, yet to prove a group's existence, every such singular matrix must have an inverse.

It has been brought to my attention, however, that a multiplicative identity need not be the typical diagonal "identity matrix" but can instead be any matrix I for which AI=IA=A for every A in the group.
For example, take the matrix
[tex]A = \begin{pmatrix}3&3\\0&0\end{pmatrix}.[/tex]
Since the determinant of this matrix is 0, it satisfies the singular aspect. However, my classmate is claiming that the multiplicative inverse of this matrix is
[tex]\begin{pmatrix}1/3&1/3\\0&0\end{pmatrix},[/tex]
which would indeed satisfy the above requirement of AI=IA=A.

    Is this possible? Please help, I'm so confused!!
     
  3. Aug 30, 2010 #2

    Hurkyl


A very simple example of a group of matrices under matrix multiplication is the set of all n×n matrices whose entries are all zero except for the upper-left entry, which is nonzero.
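
To make this concrete, here is a quick numerical check (a minimal sketch in Python/NumPy; the helper name upper_left is not from the thread) that these matrices are closed under multiplication, that the one with a 1 in the corner acts as the identity, and that every one of them is singular for n ≥ 2:

[code]
import numpy as np

def upper_left(a, n=2):
    """n x n matrix that is zero except for entry (0, 0), which is a."""
    M = np.zeros((n, n))
    M[0, 0] = a
    return M

A, B = upper_left(3.0), upper_left(5.0)
E = upper_left(1.0)   # the identity element of the group

assert np.allclose(A @ B, upper_left(15.0))             # closure
assert np.allclose(A @ E, A) and np.allclose(E @ A, A)  # E is the identity
assert np.allclose(A @ upper_left(1/3), E)              # inverse of A
print(np.linalg.det(A))   # 0.0: singular, as required
[/code]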
     
  4. Aug 30, 2010 #3
    I need to find an example of a group of singular, non-diagonal matrices under matrix multiplication. Does such a thing exist?
     
  5. Aug 30, 2010 #4

    Office_Shredder


    Why is the multiplicative inverse allegedly acting like the multiplicative identity?

I think I see the group that your classmate was pointing towards. Multiplying your matrices
[tex]\begin{pmatrix}3&3\\0&0\end{pmatrix}[/tex]

and

[tex]\begin{pmatrix}1/3&1/3\\0&0\end{pmatrix}[/tex]

together, we get the matrix

[tex]\begin{pmatrix}1&1\\0&0\end{pmatrix}[/tex]
    These matrices have something in common. Their bottom row is zero.
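
Spelling out where this hint leads (a direct computation, using only the matrices above): for nonzero a and b,
[tex]\begin{pmatrix}a&a\\0&0\end{pmatrix}\begin{pmatrix}b&b\\0&0\end{pmatrix}=\begin{pmatrix}ab&ab\\0&0\end{pmatrix},[/tex]
so the set of such matrices with a ≠ 0 is closed under multiplication, a = 1 gives the identity element, and a ↦ 1/a gives inverses.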
     
  6. Aug 30, 2010 #5
You can get your friend's and Hurkyl's examples to match by changing the basis.
e.g.
    [tex]P = \begin{pmatrix}1&1\\0&-1\end{pmatrix}[/tex]
    [tex]A = \begin{pmatrix}a&0\\0&0\end{pmatrix}[/tex]
    [tex]P^{-1} A P = \begin{pmatrix}a&a\\0&0\end{pmatrix}[/tex]

I guess that nearly all examples you find will reduce like this.
Basically, almost every such group can be represented by invertible matrices.
The representation we have above is essentially the direct sum of an invertible rep and the one-element group in which the zero map is the identity:
[tex] 0 = \mathrm{id} [/tex]
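
For readers who want to check the conjugation, a short symbolic computation (a sketch using SymPy; only the matrices already given appear):

[code]
from sympy import Matrix, symbols

a = symbols('a')
P = Matrix([[1, 1], [0, -1]])   # the change-of-basis matrix above
A = Matrix([[a, 0], [0, 0]])    # Hurkyl's upper-left form

# P happens to be its own inverse, since det P = -1
print(P.inv() * A * P)   # Matrix([[a, a], [0, 0]])
[/code]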
     
  7. Aug 31, 2010 #6

    lavinia


Take any group of (n−1)×(n−1) matrices, then extend them to dimension n by sending the last basis vector to zero, i.e. by padding each matrix with a final row and column of zeros.
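
A minimal sketch of this construction in NumPy (the helper name pad is not from the thread); the padded matrices are all singular, yet they multiply exactly like the originals:

[code]
import numpy as np

def pad(M):
    """Extend M one dimension higher by adding a zero row and column,
    so the new last basis vector is sent to zero."""
    n = M.shape[0] + 1
    out = np.zeros((n, n))
    out[:-1, :-1] = M
    return out

A = np.array([[0., -1.], [1., 0.]])   # an invertible 2x2 matrix
B = np.array([[2., 0.], [0., 0.5]])   # another one

assert np.allclose(pad(A) @ pad(B), pad(A @ B))   # pad preserves products
print(np.linalg.det(pad(A)))   # 0.0: every padded matrix is singular
[/code]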
     
  8. Aug 31, 2010 #7
    Conjecture: Given a group G of matrices acting on a vector space V, there is a subspace U of V that is stable under G (i.e. g(U) is contained in U for each g in G), and restricting the elements of G to U gives an injective homomorphism from G to GL(U).
     
  9. Aug 31, 2010 #8

    Office_Shredder


I don't know about the injective part, but the invertible part is easy.

Let U be the smallest vector subspace which is stable under the action of G. This exists and is unique, since if W and T are stable under G, then G takes [tex]W\cap T[/tex] into both W and T, so it takes [tex]W\cap T[/tex] to [tex]W\cap T[/tex]. If the matrices are invertible on a stable subspace, they are also invertible when restricted to a smaller stable subspace, so it suffices to look at the smallest stable subspace. We will call this U.

Claim: the identity element of G, call it I, acts as the identity on U. If we prove this, then we have proven that all the other elements of G are invertible when restricted to U. Suppose not. We know that I^2 = I, so I is a projection onto a subspace of U; call it W. Claim: every element A of G satisfies A(U) ⊆ W (which means that U was not the smallest stable subspace, since W in particular is stable). We know that IA = A, so for u in U we have Au = I(Au), which lies in W, the image of I.

Hence I acts as the identity on U, and the group consists of invertible linear transformations on U.

Now let U be the largest subspace on which G acts by invertible linear transformations. If T and W are subspaces on which G acts invertibly (is that the right word?), then G acts invertibly on T+W. Again we only need to confirm that the identity acts as the identity on T+W, but I(t+w) = It + Iw = t + w. So U exists and is unique.

If the matrices all act differently, it will be on this subspace, but I don't have a proof or counterexample for this.
     
  10. Aug 31, 2010 #9
    My point about the injective homomorphism was that G could be regarded as a subgroup of GL(U); the goal was to characterize all groups of matrices on V.

    If you chose U to be the smallest stable subspace, then U would clearly be 0, which is totally uninteresting. (In that case, the homomorphism G -> GL(U) would be trivial, since GL(U) is.)
     
  11. Aug 31, 2010 #10

    Office_Shredder


Right. That just shortens the proof a little bit. The idea is that you first pick U small to show that there exists a subspace on which every matrix acts invertibly; then you switch gears and look at the largest such subspace, which most likely won't be 0. You can't look at the largest such subspace without first proving that subspaces on which G acts invertibly exist, though. I was just typing on the fly, so it was a pretty ugly way to describe things.

    To shorten things up considerably:

Let U = I(V). Since I^2 = I, I truly acts as the identity on U, which means that every element A of G, when restricted to U, acts as an invertible transformation (with inverse A^{-1} in G, restricted to U). Then what we need to do is show that if A and B are in G and act identically on U, they act identically off of U as well.
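
To see U = I(V) concretely on the group from earlier in the thread (a small numeric sketch; the helper name g is not from the thread):

[code]
import numpy as np

def g(a):
    """Element of the group {[[a, a], [0, 0]] : a != 0} discussed above."""
    return np.array([[a, a], [0., 0.]])

e = g(1.0)                      # the group identity
assert np.allclose(e @ e, e)    # e is idempotent: e^2 = e

# U = e(V) is the line spanned by (1, 0); each g(a) restricted to U
# is just multiplication by a, which is invertible for a != 0
u = e @ np.array([2.0, 3.0])    # e sends (2, 3) to (5, 0), a vector in U
print(g(4.0) @ u)               # [20.  0.] = 4 * u
[/code]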
     
  12. Aug 31, 2010 #11
    I have a proof. Let e be the identity of G, and let U = e(V) be the image of e.

    U is stable under G, since for g in G and u in U, we have g(u) = e(g(u)) is in U. And in fact, e acts as the identity on U, since for u in U, we have u = e(v) for some v in V, and e(u) = e(e(v)) = e(v) = u.

Now for g in G, let ρ(g) = g|U denote the restriction of g to U; since U is stable under g, we regard ρ(g) as a linear map from U to U. It's clear that for g and g' in G, we have ρ(gg') = ρ(g)ρ(g'). The claim is that ρ(g) is in GL(U). Indeed, if g^{-1} denotes the inverse of g in G, then ρ(g)ρ(g^{-1}) = ρ(gg^{-1}) = ρ(e) is the identity of GL(U), and likewise for ρ(g^{-1})ρ(g). Thus ρ is a group homomorphism from G to GL(U).

    Finally, we must show that ρ is injective. So if g is in ker(ρ), then ρ(g) = ρ(e) so g acts as the identity on U. But then clearly g = e, since G is a group.

    edit: Ahhh, you just posted before I did. But I showed ρ is injective.

    The conclusion: If G is a group of operators acting on a space V, then G is isomorphic to a subgroup of GL(U) for some subspace U of V.
     
  13. Aug 31, 2010 #12

    Office_Shredder


g acts as the identity on U, but g could do something outside of U that's different from e. From trying to construct an example, I don't think it can in the finite-dimensional case, but maybe some weird infinite-dimensional situation can screw you up.
     
  14. Aug 31, 2010 #13
But you see: g = ge = e. If v is in V, then e(v) is in U, so g(e(v)) = e(v); that is, (ge)(v) = e(v) for every v, hence ge = e, and g = ge = e with no assumption on the dimension.
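
Tying the conclusion back to the original example: for the group of matrices [tex]\begin{pmatrix}a&a\\0&0\end{pmatrix}[/tex] with a ≠ 0, the identity is [tex]e = \begin{pmatrix}1&1\\0&0\end{pmatrix}[/tex], so U = e(V) is the line spanned by (1, 0)^T, each group element acts on U as multiplication by a, and ρ identifies the group with GL(U), i.e. with the nonzero reals.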
     