If ad = bc, then the (abcd) matrix has no inverse

E'lir Kramer
This is problem 4.14 in Advanced Calculus of Several Variables. Two questions: is my proof correct? And is there a cleaner way to prove it?

Show that, if ##ad = bc##, then the matrix $$\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right]$$ has no inverse.

My attempt:

Suppose there is an inverse such that

$$\left[ \begin{array}{cc} a & b \\ c & d \end{array} \right] \times \left[ \begin{array}{cc} a_i & b_i \\ c_i & d_i \end{array} \right] = I = \left[ \begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array} \right]$$

By the definition of matrix multiplication, we have

$$\begin{align*} aa_i + bc_i &= 1 \quad (1) \\ ab_i + bd_i &= 0 \quad (2) \\ ca_i + dc_i &= 0 \quad (3) \\ cb_i + dd_i &= 1 \quad (4) \end{align*}$$
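As a quick sanity check before working through the cases (an aside, not part of the proof, and assuming the sympy library is available), we can confirm for one concrete matrix with ##ad = bc## that the system (1)–(4) has no solution:

```python
# Sanity check: for a concrete matrix with ad = bc, the system (1)-(4)
# in the unknowns a_i, b_i, c_i, d_i has no solution.
import sympy as sp

ai, bi, ci, di = sp.symbols('a_i b_i c_i d_i')
a, b, c, d = 1, 2, 3, 6  # ad = bc = 6

eqs = [
    sp.Eq(a*ai + b*ci, 1),  # (1)
    sp.Eq(a*bi + b*di, 0),  # (2)
    sp.Eq(c*ai + d*ci, 0),  # (3)
    sp.Eq(c*bi + d*di, 1),  # (4)
]
print(sp.solve(eqs, [ai, bi, ci, di]))  # [] -- the system is inconsistent
```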

Now, since ##aa_i + bc_i = 1##, either ##a_i \ne 0## or ##c_i \ne 0##.

First let us suppose that ##a_i \ne 0## and seek a contradiction.

By our supposition and equations (1) and (3) above, we have
$$a = \frac{1 - bc_i}{a_i}, \qquad c = \frac{-dc_i}{a_i}.$$

To proceed further, let us also suppose that ##c_i \ne 0##. Then, from (3), we have

$$d = \frac{-ca_i}{c_i}$$

so

$$ad = \frac{-ca_i}{c_i} \cdot \frac{1 - bc_i}{a_i} = \frac{-c(1 - bc_i)}{c_i} = \frac{-c}{c_i} + bc$$

by substitution.

Since we have by hypothesis ad = bc,

$$bc = \frac{-c}{c_i} + bc$$

so

$$0 = \frac{-c}{c_i}.$$

Since ##c_i \ne 0## by supposition, ##c = 0##.

But this is a contradiction: with ##c = 0##, (4) gives ##dd_i = 1##, meaning that ##d \ne 0##, yet we have from above that
$$d = \frac{-ca_i}{c_i} = 0.$$

Now if we suppose that ##a_i \ne 0## and ##c_i = 0##, we are led to another contradiction.

By (3),

$$0 = ca_i,$$

and since ##a_i \ne 0## by our first supposition, ##c = 0##.

Since ##ad = bc = 0##, either ##a## or ##d## must then be ##0##.

If ##a = 0##, then by (1) and our supposition that ##c_i = 0##,

$$0 \cdot a_i + b \cdot 0 = 1,$$ a contradiction.

If ##d = 0##, then by (4) and the fact that ##c = 0##,

$$0 \cdot b_i + 0 \cdot d_i = 1,$$ a contradiction.

Note that this shows that ##c \ne 0## whenever ##c_i = 0## (assuming an inverse exists).

Now we have proved that if ##a_i \ne 0##, we run into a contradiction.

The final case to consider is ##a_i = 0##. This too is impossible: if ##a_i = 0##, then (1) reduces to ##bc_i = 1##, so ##c_i \ne 0##. Then we have

$$b = \frac{1 - aa_i}{c_i} \ne 0 \quad \text{by (1)}, \qquad d = \frac{-ca_i}{c_i} \quad \text{by (3)}.$$

Again by hypothesis ##ad = bc##, so

$$\begin{align*} a \cdot \frac{-ca_i}{c_i} &= c \cdot \frac{1 - aa_i}{c_i} \\ -aca_i &= c(1 - aa_i) = c - aca_i \\ 0 &= c \end{align*}$$

So ##c = 0##. But then, since ##a_i = 0##, (3) gives ##dc_i = 0##, so ##d = 0## (as ##c_i \ne 0##), and (4) becomes ##0 \cdot b_i + 0 \cdot d_i = 1##, a contradiction. So we're done.
 
Matrix B has an inverse iff det(B) ≠ 0.
det(B) = ad − bc, and ad = bc ⇒ det(B) = 0. Thus B has no inverse.
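To illustrate (a numerical aside, assuming numpy, which of course the book hasn't introduced):

```python
# A matrix with ad = bc has determinant zero, and numpy refuses to invert it.
import numpy as np

B = np.array([[1.0, 2.0],
              [3.0, 6.0]])  # ad = 6 = bc

print(np.linalg.det(B))     # 0.0
try:
    np.linalg.inv(B)
except np.linalg.LinAlgError as err:
    print("no inverse:", err)  # "Singular matrix"
```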
 
I'll accept your proof, as long as you also prove that matrix B has an inverse iff det(B) ≠ 0 :)

Oh, and first, you'll have to define det() - we haven't covered it in the book. Edit: and by the way, I'm not intentionally playing dumb. I've never done linear algebra, and my first encounter with it is in this book. I really don't know what the det() function is.
 
Since you haven't learned about determinants yet, it is not reasonable to use them, as you point out.

Your proof seems very long-winded. You are doing a proof by contradiction in which you assume that ad = bc and that the inverse of A exists. You should get a contradiction, which will mean that no such inverse exists.

You could simplify your work by using, say, e, f, g, and h as the elements of the supposed inverse, instead of ##a_i##, etc. When you row-reduce the matrix in the left half of your augmented matrix, you will at some point need to divide by ad − bc, which is zero by hypothesis.
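A sketch of that row reduction, assuming ##a \ne 0## for the first step (the ##a = 0## case would be handled separately):

$$\left[\begin{array}{cc|cc} a & b & 1 & 0 \\ c & d & 0 & 1 \end{array}\right] \xrightarrow{R_2 \,\to\, aR_2 - cR_1} \left[\begin{array}{cc|cc} a & b & 1 & 0 \\ 0 & ad - bc & -c & a \end{array}\right]$$

With ##ad - bc = 0##, the left half of the second row is all zeros, so the left block can never be reduced to the identity, and no inverse can be read off.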
 
My tragic ignorance of linear algebra also leaves me bereft of any concept of "row reduction" :). I will say that the difficulty of this proof has motivated me to acquire the tools of linear algebra, though.
 
Linear algebra is pretty important for multivariable calculus. You really should get an introductory book for it. There doesn't seem much point in trying to work out what is purely a linear algebra problem without knowing linear algebra.
 
Well, he's teaching me linear algebra right now, wouldn't you say? In the finest tradition of Bourbaki, he has yet to assign a problem that can't be solved without reference to external sources. After 60 harrowing, yet satisfying problems, I trust him to deliver me safely, as long as I do my part - and as long as I'm still getting traction on these end of chapter problems, I feel up to the challenge. Besides, I peeked ahead, and he's going to develop det() in ten pages.
 
Well, personally I think it would be ill-advised to learn linear algebra "on the fly" from a calculus book. However, for this particular problem, I suggest the easiest way to prove the statement without using any elementary linear algebra results is the following:

It is true in general (for linear and non-linear functions) that for a function to have an inverse, it must be one-to-one from its domain to its image. If your Socratically-minded textbook author hasn't established this yet, then it's a very convoluted book indeed. So, consider the function from ##R^2## to ##R^2## defined by left multiplying a column vector by the matrix you've given. (i.e. ##f(x,y) = (ax + by, cx + dy)##). Argue that the matrix being invertible is equivalent to the function f having an inverse. Then construct, using the fact that ##ad=bc##, two different column vectors that get mapped to the same vector by f. Of course, their existence means f is not 1-1 and thus not invertible.

There are infinitely many possible pairs of vectors you could use, though there are a few especially simple choices. Can you see what they are?
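To see the idea in action with concrete numbers (a hypothetical illustration, deliberately not one of the especially simple symbolic choices):

```python
# For a = 1, b = 2, c = 3, d = 6 (so ad = bc = 6), two distinct inputs
# are mapped to the same output, so f cannot be one-to-one.
def f(x, y, a=1, b=2, c=3, d=6):
    return (a*x + b*y, c*x + d*y)

print(f(0, 0))   # (0, 0)
print(f(2, -1))  # also (0, 0): 1*2 + 2*(-1) = 0 and 3*2 + 6*(-1) = 0
```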
 
E'lir Kramer said:
I'm not intentionally playing dumb. I've never done linear algebra, and my first encounter with it is in this book. I really don't know what the det() function is.

E'lir Kramer said:
In the finest tradition of Bourbaki, he has yet to assign a problem that can't be solved without reference to external sources.

Is it just me, or is it odd that someone has heard of Bourbaki before the determinant?
 
Thank you, LastOne. That was a great hint toward a much better proof.

Consider the function ##f : \mathbb{R}^2 \to \mathbb{R}^2## such that ##f(x) = Mx##, where $$M = \left[ \begin{array}{cc} a & b \\ c & d \end{array} \right].$$

We can write ##f(x,y) = (ax + by, cx + dy)##. Finding ##f^{-1}## is the same thing as finding ##M^{-1}##.

Proof:
$$\begin{align*} f^{-1}(f(x)) &= (f^{-1} \circ f)(x) = M_{f^{-1}} M_f x \\ (f^{-1} \circ f)(x) &= x \\ M_{f^{-1}} M_f x &= x \\ M_{f^{-1}} M_f &= I \\ M_{f^{-1}} &= M_f^{-1} \end{align*}$$

If we prove ##f^{-1}## doesn't exist, then we've proven that ##M^{-1}## doesn't. All that is required to prove that an inverse for ##f## doesn't exist is to find two different values in the domain that are mapped to the same value by ##f##.

If we also have ##ad = bc##, there are such vectors:

$$\begin{align*} f(d-b,\, a-c) &= (ad - ab + ab - bc,\ cd - bc + ad - cd) = (ad - bc,\ ad - bc) = (0, 0) \\ f(b-d,\, c-a) &= (ab - ad + bc - ab,\ bc - cd + cd - ad) = (bc - ad,\ bc - ad) = (0, 0) \end{align*}$$

So, e.g., for
$$M = \left[ \begin{array}{cc} 1 & 2 \\ 3 & 6 \end{array} \right],$$
we could think of ##f(4, -2)## and ##f(-4, 2)##.

In general, any vectors ##k(d - b,\ a - c)## and ##k(b - d,\ c - a)## map to zero (and as long as these are nonzero, they give distinct inputs with the same image).
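As a symbolic double-check of that last claim (an aside, assuming sympy):

```python
# M * (k*(d-b), k*(a-c)) works out to k*(a*d - b*c) * (1, 1),
# which is the zero vector exactly when ad = bc.
import sympy as sp

a, b, c, d, k = sp.symbols('a b c d k')
M = sp.Matrix([[a, b], [c, d]])
v = sp.Matrix([k*(d - b), k*(a - c)])

print((M * v).expand())  # Matrix([[a*d*k - b*c*k], [a*d*k - b*c*k]])
```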
 
rollingstein said:
Is it just me, or is it odd that someone has heard of Bourbaki before the determinant?

"Bourbaki" was quoted at the beginning of a chapter in Spivak, and I looked "him" up the other day. I've been planning my own course of self study, so that entails some amount of scholarship of the history of mathematics. I identify with the project that those men were working on: as I understand it, in the aftermath of WWI, their teachers were dead or fled, and they were essentially resolved to teach themselves mathematics with books and elbow grease.
 
E'lir Kramer said:
Thank you, LastOne. That was a great hint to a much better proof.
Indeed. You're welcome. I had in mind the slightly simpler case of ##f(d, 0) = f(0, c)##, but of course yours works perfectly well too.

Edit: Though your specific example is wrong. For starters, ##4 \not= 6##. I think the general case is probably fine, though I haven't worked it through.
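Spelling that simpler pair out (a quick check; for the two inputs to be distinct, ##c## and ##d## must not both be zero):

$$f(d, 0) = (ad, cd), \qquad f(0, c) = (bc, dc),$$

and these coincide exactly when ##ad = bc##.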
 
Doh. Fixed to a real example.
 