# If A^2 = 0, then A + I is nonsingular. Proof?

1. If A^2 = 0, then A + I is nonsingular. Prove this or give a counter-example.

## Homework Equations

I suppose you need to know Gauss-Jordan elimination and matrix multiplication.

## The Attempt at a Solution

I first thought this was a false statement, so I tried to find a counter-example. I tried 2x2 matrices, both real- and complex-valued. For example, I let A = [[1,1],[-1,-1]], which indeed satisfies A^2 = 0; however, A + I is nonsingular. With complex numbers I get matrices of the familiar form [[a,-b],[b,a]], which are nonsingular (every nonzero matrix of that form is invertible, real-valued or not).
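A quick numerical sanity check of this example (a plain-Python sketch, not part of any proof):

```python
def matmul(X, Y):
    # product of two 2x2 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [-1, -1]]
I2 = [[1, 0], [0, 1]]

A2 = matmul(A, A)  # should be the zero matrix
A_plus_I = [[A[i][j] + I2[i][j] for j in range(2)] for i in range(2)]

# determinant of the 2x2 matrix A + I; nonzero means nonsingular
det = A_plus_I[0][0] * A_plus_I[1][1] - A_plus_I[0][1] * A_plus_I[1][0]

print(A2)   # [[0, 0], [0, 0]]
print(det)  # 1
```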
So my thought now is that it is nonsingular. My issue is not so much the process of proving it, but what the insight might be as to why it would be nonsingular. How does the fact that A^2 = 0 help? What does it change about A's form to guarantee that A + I is nonsingular? Or does it not guarantee this, and have I just somehow missed a good counter-example?

-Ian

## Answers and Replies

I can't use determinants yet. That is the next chapter in the book and the professor does not want us using material not yet gone over in our proofs. :( Is there a way to think about it without resorting to determinants (I already figured out how to trivially prove most of these questions using determinants, but doing so without using determinants is harder).
I need to see what is actually revealed about A because it has the property A^2 = 0.

lanedance
Homework Helper
have a think about what you could multiply (A + I) by to get the identity, thus showing that an inverse exists

Oh yeah, I tried that earlier. You mean A*A + A*I = A*C -> 0 + A*I = A*C -> A*I = A*C?
The only way to get the identity from that is if I could invert A and show that I = C. But nothing says that A is nonsingular, so I can't just invert it. In other words, I could multiply by an inverse (A^-1), giving (A^-1)*A*A + I*I = C -> I = C, but this relies on A being nonsingular. I wasn't able to think of another way to get rid of A unless A is nonsingular.

Hurkyl
Staff Emeritus
Science Advisor
Gold Member
How does the fact that A^2 = 0 help? What does it change about A's form in order to guarantee that A + I is nonsingular?
While matrices aren't scalars, they do bear many resemblances to them.

In some sense, A² being zero means that A is zero-like. An infinitesimal, if you will. This means A + I is one-like; in some sense it's "close" to I.

You can use your knowledge about numbers to suggest how to invert I+A....

Incidentally, there's a number system called the "dual numbers". They are defined very similarly to the complex numbers, except that the dual unit e satisfies e² = 0 (rather than i² = -1, as the complex unit does). Playing with the dual numbers might give you even more algebraic insight....

lanedance
Homework Helper
No: as you mention, A^2 = 0 implies both A and A^2 are singular. This will be more obvious when you learn about determinants, but for now it means neither A nor A^2 has an inverse.

start with (A+I)^2 = (A+I)(A+I), multiply it out and see what you get, then see how you could change it to make (X)(A+I) = I

if you can show that, you have found the inverse, and so shown that (A + I) is non-singular

Ah, that is an interesting insight. So A + I is close to 1, since I is the identity, the matrix analogue of 1 (as I^n = I). But what does being close to the identity have to do with being invertible? I can't make the connection; is there some property revealed by A + I being "close" to 1? What does that even mean, exactly?

lane, I've been trying (A+I)^2 just now. I'll tell you the results in a moment.

Even if I get the answer that way, I am very interested in your insight hurkyl. I would love if you would go a bit more in depth so I can understand what you mean.

Hurkyl
Staff Emeritus
Science Advisor
Gold Member
What does that even mean, exactly?
I'm not sure what it means exactly. But I know what it means for real numbers. What can you say about real numbers? Can you make any of it work for matrices?

I'll have to think about that more. I can't see the connection right now. It's 5 am where I am so I need to sleep soon.
I figured out that (-A + I) is an inverse of A + I using lane's hint. Thanks lane.
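That candidate inverse can be checked numerically on the 2x2 example from the first post (a plain-Python sketch, not part of the proof):

```python
def matmul(X, Y):
    # product of two 2x2 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 1], [-1, -1]]
I2 = [[1, 0], [0, 1]]

candidate = [[I2[i][j] - A[i][j] for j in range(2)] for i in range(2)]  # -A + I
A_plus_I = [[A[i][j] + I2[i][j] for j in range(2)] for i in range(2)]

print(matmul(candidate, A_plus_I))  # [[1, 0], [0, 1]]
```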

lanedance
Homework Helper
what it means is that a number "like" zero does not have an inverse (compare with 1/0, which doesn't exist), but something more like one will have an inverse

lanedance
Homework Helper
I'll have to think about that more. I can't see the connection right now. It's 5 am where I am so I need to sleep soon.
I figured out that (-A + I) is an inverse of A + I using lane's hint. Thanks lane.
no worries, sounds right

Hurkyl
Staff Emeritus
Science Advisor
Gold Member
If x is a number close to zero, then one can use calculus to rewrite the inverse of 1+x, e.g. as
1 - x + x² - x³ + ...​
(which could also be figured out without calculus, of course)
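Spelling the analogy out (a sketch of the heuristic, substituting the matrix relation A² = 0 into the scalar series):

```latex
% Geometric series for a real number x with |x| < 1:
(1 + x)^{-1} = 1 - x + x^2 - x^3 + \cdots
% If A^2 = 0, every term of degree >= 2 vanishes, suggesting
(I + A)^{-1} = I - A
% which can be verified directly: (I - A)(I + A) = I - A^2 = I.
```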

Hurkyl
Staff Emeritus
Science Advisor
Gold Member
Now, using calculus to suggest the form of the inverse is something you might do without knowing anything sophisticated about matrix calculus. You look at real numbers to get an idea, then you see if you can make that idea work with matrices.

However, just as an interesting FYI, calculus does still work with matrices. It's a little more subtle and not always as nice, but it still works.

Most matrices are diagonalizable. For any complex function f, you can define its action on a diagonalizable matrix by applying f to the eigenvalues.

If f is a polynomial function, it's easy to check that the above definition is the same as doing ordinary arithmetic.

Nilpotent matrices act like infinitesimals. If, for example, NN = 0 and f is differentiable, then f(N) is given by a differential "approximation":
f(N) := f(0) I + f'(0) N​
Similarly, if NNN = 0, then we would instead have
f(N) := f(0) I + f'(0) N + (1/2) f''(0) N²​

Again, if f is a polynomial, this is the same as ordinary arithmetic. I put "approximation" in quotes because it's not an approximation at all -- it's an exact identity.
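As a concrete illustration (a plain-Python sketch, with f = exp chosen for concreteness), one can sum the matrix exponential series for a nilpotent N and watch it terminate at exactly f(0) I + f'(0) N:

```python
def matmul(X, Y):
    # product of two 2x2 matrices given as nested lists
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

N = [[0, 1], [0, 0]]  # NN = 0

# exp(N) = sum over k of N^k / k!; with NN = 0 only k = 0, 1 survive
expN = [[1.0, 0.0], [0.0, 1.0]]  # k = 0 term: the identity
term = [[1.0, 0.0], [0.0, 1.0]]
for k in range(1, 6):
    term = matmul(term, N)  # N^k numerator; zero for k >= 2
    term = [[term[i][j] / k for j in range(2)] for i in range(2)]
    expN = [[expN[i][j] + term[i][j] for j in range(2)] for i in range(2)]

# exp(0) I + exp'(0) N = I + N, exactly
print(expN)  # [[1.0, 1.0], [0.0, 1.0]]
```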

The Jordan normal form tells us that any matrix can be written as
A = P⁻¹ (D + N) P​
where DN = ND. (If A is diagonalizable, then N = 0 and this is the ordinary diagonal form.) If NN = 0, we again see the differential "approximation":
f(A) = P⁻¹ (f(D) + f'(D) N) P​
or if NNN = 0, we get
f(A) = P⁻¹ (f(D) + f'(D) N + (1/2) f''(D) N²) P​

This looks just like a Taylor series -- but because N is nilpotent, all but finitely many terms are zero.

The above is formulated in a very algebraic fashion. One can develop matrix calculus more along the lines of ordinary calculus, using limits, sequences, convergence, and all that sort of stuff, which I think is the more common approach.

Very nicely explained Hurkyl. I like that perspective.

A² = 0 => A + I is nonsingular

Proof:

Let A² = 0. Then

(I - A)(I + A) = I + A - A - A² = I - A² = I

and the same computation with the factors swapped gives (I + A)(I - A) = I. So I - A is an inverse of I + A, and A + I is nonsingular.

Seems like a basic matrix algebra problem. The Neumann series is just overkill!

lanedance
Homework Helper
this thread is 1.5 years old and not worth re-opening