Divide by Matrix: Is it Possible?

  • Thread starter: spamiam
  • Tags: Matrices
In post #7 of this thread (https://www.physicsforums.com/showthread.php?t=532666), the OP asked whether one could meaningfully divide by a matrix. Certainly this is possible for invertible matrices, but I'm wondering if it's possible to define something similar even for singular matrices.

For instance, suppose I have a singular matrix ##A##. If ##B = \lambda A##, it seems natural to define ##\frac{B}{A} := \lambda I##. However, I don't think this operation is well-defined. Since ##A## is singular, left multiplication by ##A## has a nontrivial kernel, so there is some nonzero vector ##v## such that ##Av = 0##. Letting ##V## be the matrix whose columns are all equal to ##v##, we have ##AV = 0##, hence ##B = A \cdot \lambda I = A \cdot (\lambda I + V)##, so ##\frac{B}{A}## could just as well be equal to ##\lambda I + V##.
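To make the non-uniqueness concrete, here is a small numerical sketch (the specific ##A##, ##\lambda##, and ##v## below are just illustrative choices, and the numpy usage is my own rather than anything from the thread):

```python
# Illustrative sketch of the non-uniqueness: two different matrices X
# satisfy A @ X == B when A is singular.
import numpy as np

lam = 3.0
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # singular: second row is twice the first
B = lam * A

v = np.array([2.0, -1.0])           # nonzero vector in the kernel: A @ v == 0
assert np.allclose(A @ v, 0)

V = np.column_stack([v, v])         # every column of V equals v, so A @ V == 0

X1 = lam * np.eye(2)                # the "natural" candidate lam * I
X2 = lam * np.eye(2) + V            # another candidate, lam * I + V

print(np.allclose(A @ X1, B))       # True
print(np.allclose(A @ X2, B))       # True -> "B / A" is not well-defined
```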

My question is, is there a way to make this division well-defined? Would working over a ring with specific properties help?
 
Firstly, if you can't do it over a field, you probably can't do it over a ring. The field is the best case for scalars.

What you are looking for is a generalized inverse, in the sense of a von Neumann regular ring (http://en.wikipedia.org/wiki/Von_Neumann_regular_ring): for any matrix ##A## there is some ##G## with ##AGA = A##. For matrices, the Moore-Penrose pseudoinverse is the standard such ##G##, and it is the one which has uniqueness.

There is also the Drazin inverse (http://en.wikipedia.org/wiki/Drazin_inverse). The Drazin inverse has a further generalization called the g-Drazin inverse, aka the Drazin-Koliha inverse, but this is more C*-algebra stuff, rather than matrix theory proper.
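As a quick illustration of the ##AGA = A## condition (a sketch of my own, not from the original reply; the singular matrix and the use of numpy's `np.linalg.pinv` are just illustrative choices):

```python
# Illustrative sketch: the Moore-Penrose pseudoinverse acts as a
# generalized inverse (A @ G @ A == A) even when A is singular.
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # singular (rank 1)

G = np.linalg.pinv(A)               # Moore-Penrose pseudoinverse of A

# Generalized-inverse identity (the von Neumann regularity condition)
print(np.allclose(A @ G @ A, A))    # True
# The pseudoinverse also satisfies the reflexive condition G @ A @ G == G
print(np.allclose(G @ A @ G, G))    # True
```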
 
Thanks for the reply--the links were very interesting.

I guess what makes ##\lambda I## the "natural" choice for ##\frac{B}{A}## when ##B = \lambda A## is that ##\lambda I## is a scalar matrix and lies in the center of the ring. These various pseudoinverses all seem to have some nice properties, so I'll have to work out some examples.
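As one such worked example (an illustrative sketch of my own; the matrix, ##\lambda##, and the numpy calls are assumptions, not taken from the thread): for ##B = \lambda A## with ##A## singular, the pseudoinverse-based quotient ##A^+ B## comes out as ##\lambda## times the orthogonal projector ##A^+ A##, not ##\lambda I##:

```python
# Illustrative sketch: "dividing" B = lam * A by a singular A via the
# Moore-Penrose pseudoinverse gives lam * (projector), not lam * I.
import numpy as np

lam = 3.0
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])          # singular (rank 1)
B = lam * A

Q = np.linalg.pinv(A) @ B           # candidate for "B / A"
P = np.linalg.pinv(A) @ A           # orthogonal projector onto the row space of A

print(np.allclose(Q, lam * P))          # True
print(np.allclose(P @ P, P))            # True: P is idempotent
print(np.allclose(Q, lam * np.eye(2)))  # False: not the scalar matrix lam * I
```

So this particular "division" only recovers ##\lambda I## on the row space of ##A##, which seems consistent with the non-uniqueness noted above.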
 