Is A singular if A multiplied by a non-zero matrix equals zero?

  • Thread starter: mikee
  • Tags: Matrix

Homework Statement

Let A be an nxn matrix. If A is row equivalent to a matrix B and there is a non-zero column matrix C such that BC=0, prove that A is singular



Homework Equations





The Attempt at a Solution

I'm not quite sure, but since B and A are row equivalent, their reduced row echelon forms will be the same? And therefore AC = 0? I was wondering: since A multiplied by a non-zero matrix equals zero, does that mean that A is singular?
 
AC = 0, yes... try a proof by contradiction; assume that A is non-singular and hence invertible... what happens when you multiply both sides of AC = 0 by the inverse of A?
 
OK, so if you multiply both sides by A^-1, you get A^-1 AC = A^-1 0, which gives IC = 0, i.e. C = 0, and since C is not 0 this is a contradiction and therefore proves A is singular?
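To spell out the full chain (writing the row equivalence as B = PA, where P is an invertible product of elementary matrices; the symbol P is just notation introduced here):

$$B = PA \;\Longrightarrow\; AC = P^{-1}BC = P^{-1}\mathbf{0} = \mathbf{0},$$
$$\text{and if } A^{-1} \text{ existed, then } C = (A^{-1}A)C = A^{-1}(AC) = A^{-1}\mathbf{0} = \mathbf{0},$$

which contradicts C ≠ 0, so no inverse of A exists.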
 
Technically, it only proves that the inverse of A does not exist, but there is a theorem that tells you a square matrix is singular iff it has no inverse, so assuming you are allowed to use that theorem, you've shown A is singular.
 
What definition are you using for "singular"? Gabbagabbahey seems to be interpreting "singular" as meaning the matrix has determinant 0. I would tend to define "singular" as meaning "non-invertible" but, as gabbagabbahey says, they are equivalent.
 
The definition I learned was that singular means non-invertible.
 
Singular means non-invertible, and a non-invertible matrix has determinant zero (the determinant being the product of the eigenvalues).
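As a concrete instance of that determinant/eigenvalue statement (the matrix below is just an illustrative example, not from the problem):

$$\det A = \prod_{i=1}^{n}\lambda_i, \qquad \text{e.g. } A = \begin{pmatrix}1 & 2\\ 2 & 4\end{pmatrix},\quad \lambda_1 = 5,\ \lambda_2 = 0,\ \det A = 1\cdot 4 - 2\cdot 2 = 0.$$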
 
The definition that I learned for a singular matrix A is that A's reduced row echelon form is NOT the identity matrix.
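Using the same illustrative 2×2 matrix as above, this RREF definition lines up with the BC = 0 condition in the original problem:

$$\begin{pmatrix}1 & 2\\ 2 & 4\end{pmatrix} \xrightarrow{R_2 \to R_2 - 2R_1} \begin{pmatrix}1 & 2\\ 0 & 0\end{pmatrix} \neq I_2, \qquad \begin{pmatrix}1 & 2\\ 2 & 4\end{pmatrix}\begin{pmatrix}-2\\ 1\end{pmatrix} = \begin{pmatrix}0\\ 0\end{pmatrix}.$$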
 
There are many ways to calculate inverses. Your definition corresponds to computing the inverse of A by applying row operations to [A|I] until it becomes [I|A^-1] (assuming that A is invertible). Another way of calculating the inverse is to divide the transpose of the cofactor matrix of A (the adjugate) by the determinant of A. Thus, if the determinant is zero, the inverse does not exist.
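For reference, the adjugate (transpose-of-cofactors) formula in the 2×2 case, which makes the "no inverse when the determinant is zero" point explicit:

$$A = \begin{pmatrix}a & b\\ c & d\end{pmatrix}, \qquad A^{-1} = \frac{1}{\det A}\operatorname{adj}(A) = \frac{1}{ad - bc}\begin{pmatrix}d & -b\\ -c & a\end{pmatrix},$$

which is undefined precisely when $ad - bc = 0$.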
 