If ad = bc, then the matrix with rows (a, b) and (c, d) has no inverse

In summary: the problem asks us to show that a 2×2 matrix with entries a, b, c, d has no inverse when ad = bc. The original attempt assumes an inverse exists and derives a contradiction from the resulting system of equations. A cleaner argument, suggested later in the thread, views the matrix as the linear map f(x, y) = (ax + by, cx + dy): the matrix is invertible only if this map is one-to-one, and when ad = bc one can exhibit two different vectors that f sends to the same point, so no inverse exists. (In the language of determinants, which the book has not yet introduced, this is the statement that the matrix is invertible if and only if ad − bc ≠ 0.)
  • #1
E'lir Kramer
This is problem 4.14 in Advanced Calculus of Several Variables. Two questions: Is my proof correct? And: is there a cleaner way to prove this?

Show that, if [itex] ad = bc [/itex], then the matrix [itex]\left [
\begin{array}{cc}
a & b \\
c & d \\
\end{array}
\right][/itex] has no inverse.

My attempt:

Suppose there is an inverse such that

[itex]\left [
\begin{array}{cc}
a & b \\
c & d \\
\end{array}
\right]

\times

\left [
\begin{array}{cc}
a_{i} & b_{i} \\
c_{i} & d_{i} \\
\end{array}
\right]

= I =

\left [
\begin{array}{cc}
1 & 0 \\
0 & 1 \\
\end{array}
\right]

[/itex]

By the definition of matrix multiplication, we have

[itex]aa_{i} + bc_{i} = 1 \>\> (1) \\
ab_{i} + bd_{i} = 0 \>\> (2) \\
ca_{i} + dc_{i} = 0 \>\> (3) \\
cb_{i} + dd_{i} = 1 \>\> (4)
[/itex]

Now, since [itex] aa_{i} + bc_{i} = 1 [/itex], either [itex] a_{i} ≠ 0 [/itex] or [itex] c_{i} ≠ 0 [/itex]; if both were zero, the left-hand side would be 0, not 1.

First let us suppose that [itex] a_{i} ≠ 0 [/itex] and seek a contradiction.

By our supposition and equations (1) and (3) above, we have
[itex] a = \frac{1-bc_{i}}{a_{i}} [/itex] and [itex] c = \frac{-dc_{i}}{a_{i}} [/itex].

To proceed further, suppose first that [itex]c_{i} ≠ 0 [/itex] as well. Then, again by (3), we have

[itex] d = \frac{-ca_{i}}{c_{i}} [/itex]

so

[itex] ad = \frac{-ca_{i}}{c_{i}} \cdot \frac{1-bc_{i}}{a_{i}} = \frac{-c(1-bc_{i})}{c_{i}} = \frac{-c}{c_{i}} + bc
[/itex]

by substitution.

Since we have by hypothesis [itex] ad = bc [/itex],

[itex] bc = \frac{-c}{c_{i}} + bc [/itex]

so

[itex] 0 = \frac{-c}{c_{i}} [/itex].

Since [itex]c_{i} ≠ 0 [/itex] by supposition, it follows that [itex] c = 0 [/itex].

But this is a contradiction, because with [itex]c = 0[/itex], equation (4) reduces to

[itex]dd_{i} = 1[/itex], meaning that [itex]d ≠ 0[/itex],

but we have from above that

[itex] d = \frac{-ca_{i}}{c_{i}} = 0 [/itex] (since [itex]c = 0[/itex]).

Now if we suppose that [itex] a_{i} ≠ 0 [/itex], [itex]c_{i} = 0[/itex], we are led to another contradiction.

By (3),

[itex] 0 = ca_{i} + dc_{i} = ca_{i} [/itex] (because [itex]c_{i} = 0[/itex]),

and by our first supposition that [itex] a_{i} ≠ 0 [/itex], [itex] c = 0 [/itex].

Since [itex] ad = bc = 0 [/itex] (because [itex]c = 0[/itex]), either [itex]a[/itex] or [itex]d[/itex] must then be 0.

If [itex] a = 0[/itex], then by (1) and our supposition that [itex]c_{i} = 0[/itex], we get

[itex] 0 \cdot a_{i} + b \cdot 0 = 1 [/itex], a contradiction.

If [itex] d = 0 [/itex], then by (4) and the fact that [itex]c = 0[/itex] (shown above), we get

[itex] 0 \cdot b_{i} + 0 \cdot d_{i} = 1 [/itex], a contradiction.

Note that this argument also shows that [itex] c ≠ 0 [/itex] whenever [itex]c_{i} = 0[/itex], although we will not need that fact below.

Now we have proved that if [itex] a_{i} ≠ 0, [/itex] we run into a contradiction.

The final case to consider is [itex] a_{i} = 0 [/itex]. This, too, is impossible. If [itex] a_{i} = 0[/itex], then [itex] c_{i} ≠ 0 [/itex] by (1). From (1) and (3) we then have

[itex] b = \frac{1-aa_{i}}{c_{i}} = \frac{1}{c_{i}} ≠ 0 \\
d = \frac{-ca_{i}}{c_{i}}
[/itex].

Again by hypothesis,

[itex] a \cdot \frac{-ca_{i}}{c_{i}} = c \cdot \frac{1-aa_{i}}{c_{i}} \\
-aca_{i} = c(1-aa_{i}) = c - aca_{i} \\
c = 0
[/itex]

But then [itex] d = \frac{-ca_{i}}{c_{i}} = 0 [/itex] as well, so (4) becomes [itex] 0 \cdot b_{i} + 0 \cdot d_{i} = 1 [/itex], a contradiction. Every case leads to a contradiction, so the matrix has no inverse.
 
  • #2
Matrix B has an inverse iff det(B) is different from 0.
det(B) = ad − bc, and ad = bc ⇒ det(B) = 0. Thus B has no inverse.
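
For reference (the book has not introduced this yet at this point in the thread), the familiar closed form of the 2×2 inverse shows where ad − bc enters:

[itex]\left [
\begin{array}{cc}
a & b \\
c & d \\
\end{array}
\right]^{-1}
= \frac{1}{ad-bc}
\left [
\begin{array}{cc}
d & -b \\
-c & a \\
\end{array}
\right]
[/itex]

One can verify by direct multiplication that this is a two-sided inverse whenever [itex] ad - bc ≠ 0 [/itex]; the formula clearly breaks down when [itex] ad = bc [/itex].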
 
  • #3
I'll accept your proof, as long as you also prove that matrix B has an inverse iff det(B) ≠ 0 :)

Oh, and first, you'll have to define det() - we haven't covered it in the book. Edit: and by the way, I'm not intentionally playing dumb. I've never done linear algebra, and my first encounter with it is in this book. I really don't know what the det() function is.
 
  • #4
Since you haven't learned about determinants yet, then it is not reasonable to use them, as you point out.

Your proof seems very long-winded. You are doing a proof by contradiction in which you assume that ad = bc and that the inverse of A exists. You should get a contradiction, which will mean that no such inverse exists.

You could simplify your work by using, say, e, f, g, and h as the elements of the supposed inverse, instead of a_i, etc. When you row-reduce the matrix in the left half of your augmented matrix, you will at some point need to divide by ad - bc, which is zero by hypothesis.
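
A rough sketch of what that looks like (my illustration, assuming [itex]a ≠ 0[/itex] so there is a first pivot; the case [itex]a = 0[/itex] needs separate handling):

[itex]\left [
\begin{array}{cc|cc}
a & b & 1 & 0 \\
c & d & 0 & 1 \\
\end{array}
\right]
\;\longrightarrow\;
\left [
\begin{array}{cc|cc}
a & b & 1 & 0 \\
0 & ad-bc & -c & a \\
\end{array}
\right]
[/itex]

(replacing the second row by [itex] aR_{2} - cR_{1} [/itex]). To turn the second pivot into 1 you would have to divide the second row by [itex]ad - bc[/itex], which is zero by hypothesis, so the reduction cannot continue and no inverse can be produced.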
 
  • #5
My tragic ignorance of linear algebra also leaves me bereft of any concept of "row reduction", as well :). I will say that the difficulty of this proof has motivated me to acquire the tools of linear algebra, though.
 
  • #6
Linear algebra is pretty important for multivariable calculus. You really should get an introductory book for it. There doesn't seem much point in trying to work out what is purely a linear algebra problem without knowing linear algebra.
 
  • #7
Well, he's teaching me linear algebra right now, wouldn't you say? In the finest tradition of Bourbaki, he has yet to assign a problem that can't be solved without reference to external sources. After 60 harrowing, yet satisfying problems, I trust him to deliver me safely, as long as I do my part - and as long as I'm still getting traction on these end of chapter problems, I feel up to the challenge. Besides, I peeked ahead, and he's going to develop det() in ten pages.
 
  • #8
Well, personally I think it would be ill-advised to learn linear algebra "on the fly" from a calculus book. However, for this particular problem, I suggest the easiest way to prove the statement without using any elementary linear algebra results is the following:

It is true in general (for linear and non-linear functions) that for a function to have an inverse, it must be one-to-one from its domain to its image. If your Socratically-minded textbook author hasn't established this yet, then it's a very convoluted book indeed. So, consider the function from ##R^2## to ##R^2## defined by left multiplying a column vector by the matrix you've given. (i.e. ##f(x,y) = (ax + by, cx + dy)##). Argue that the matrix being invertible is equivalent to the function f having an inverse. Then construct, using the fact that ##ad=bc##, two different column vectors that get mapped to the same vector by f. Of course, their existence means f is not 1-1 and thus not invertible.

There are infinitely many possible pairs of vectors you could use, though there are a few especially simple choices. Can you see what they are?
 
  • #9
I'm not intentionally playing dumb. I've never done linear algebra, and my first encounter with it is in this book. I really don't know what the det() function is.

In the finest tradition of Bourbaki, he has yet to assign a problem that can't be solved without reference to external sources.

Is it just me, or is it odd that someone has heard of Bourbaki before hearing of the determinant?
 
  • #10
Thank you, LastOne. That was a great hint to a much better proof.

Consider the function [itex]f : \mathbb{R}^{2} \to \mathbb{R}^{2}[/itex] defined by [itex] f(x) = Mx[/itex], where M = [itex]\left [
\begin{array}{cc}
a & b \\
c & d \\
\end{array}
\right]

[/itex]

We can write that [itex] f(x,y) = (ax + by, cx + dy) [/itex]. Finding [itex]f^{-1}[/itex] is the same thing as finding [itex]M^{-1}[/itex].

Proof: [itex]
f^{-1}(f(x)) = (f^{-1} \circ f)(x) = M_{f^{-1}}M_{f}x \\

(f^{-1} \circ f)(x) = x \\

M_{f^{-1}}M_{f} x = x \\

M_{f^{-1}}M_{f} = I \\

M_{f^{-1}} = M_{f}^{-1} \\
[/itex]

If we prove [itex]f^{-1}[/itex] doesn't exist, then we've proven that [itex]M^{-1}[/itex] doesn't. All that is required to prove that an inverse for f doesn't exist is to find two different values in the domain that are mapped to the same value by f.
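
Strictly speaking, the direction needed here is the converse of the chain above, so let me spell it out (my addition): if [itex]M^{-1}[/itex] existed, then the map [itex]g(x) = M^{-1}x[/itex] would be an inverse for f, since

[itex] g(f(x)) = M^{-1}(Mx) = (M^{-1}M)x = Ix = x \\
f(g(x)) = M(M^{-1}x) = (MM^{-1})x = Ix = x
[/itex]

So if f has no inverse, [itex]M^{-1}[/itex] cannot exist.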

If we also have that ad = bc, there are such vectors.

[itex] f((d-b),(a-c)) = (ad - ab + ab - bc, cd - bc + ad - cd) = (ad - bc, ad - bc) = (0, 0) \\
f((b-d), (c-a)) = (ab - ad + bc - ab, bc - cd + cd - ad) = (bc - ad, bc - ad) = (0, 0)
[/itex]

So, e.g., for

M = [itex]\left [
\begin{array}{cc}
1 & 2 \\
3 & 6 \\
\end{array}
\right] [/itex], we could think of f(4, -2) and f(-4, 2).

In general any vectors k(d - b, a - c) and k(b - d, c - a) map to zero.
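
One caveat worth adding (not addressed above): the two vectors are negatives of each other, so they are genuinely distinct only when [itex](d-b, a-c) ≠ (0,0)[/itex]. Since f is linear, [itex]f(0,0) = (0,0)[/itex] as well, so a single nonzero vector mapping to zero already suffices. In the remaining degenerate case [itex]d = b[/itex] and [itex]a = c[/itex] (the two rows of M are equal), one can use instead

[itex] f(b, -a) = (ab - ba,\; cb - da) = (0, 0) = f(0, 0) [/itex]

(the second component vanishes because [itex]c = a[/itex] and [itex]d = b[/itex]), which works unless [itex]a = b = 0[/itex], in which case M is the zero matrix and obviously has no inverse.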
 
  • #11
rollingstein said:
Is it just me, or is it odd that someone has heard of Bourbaki before hearing of the determinant?

"Bourbaki" was quoted at the beginning of a chapter in Spivak, and I looked "him" up the other day. I've been planning my own course of self study, so that entails some amount of scholarship of the history of mathematics. I identify with the project that those men were working on: as I understand it, in the aftermath of WWI, their teachers were dead or fled, and they were essentially resolved to teach themselves mathematics with books and elbow grease.
 
  • #12
E'lir Kramer said:
Thank you, LastOne. That was a great hint to a much better proof.
Indeed. You're welcome. I had in mind the slightly simpler case of f(d,0) = f(0,c) but of course yours works perfectly well too.

Edit: Though your specific example is wrong. For starters, ##4 \not= 6##. I think the general case is probably fine, though I haven't worked it through.
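
Spelling that simpler pair out, for the record:

[itex] f(d, 0) = (ad,\; cd), \qquad f(0, c) = (bc,\; dc) [/itex]

which agree exactly when [itex]ad = bc[/itex]; the inputs [itex](d, 0)[/itex] and [itex](0, c)[/itex] are distinct as long as c and d are not both zero.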
 
  • #13
Doh. Fixed to a real example.
 

1. What does the condition ad = bc mean?

The condition ad = bc is a relationship between the four entries of a 2x2 matrix: the product of the diagonal entries (ad) equals the product of the off-diagonal entries (bc).

2. How does the condition ad = bc relate to the inverse of a matrix?

For a 2x2 matrix, ad = bc is precisely the condition under which no inverse exists. Equivalently, the matrix has an inverse if and only if ad ≠ bc, i.e. ad − bc ≠ 0.

3. What is the significance of the inverse of a matrix?

The inverse of a matrix is a matrix that, when multiplied by the original matrix, gives the identity matrix. It is useful, for example, for solving systems of linear equations written as matrix equations.
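
As a small illustration (an example added here, not taken from the thread): for an invertible matrix the system can be solved by multiplying by the inverse,

[itex] \left [
\begin{array}{cc}
1 & 2 \\
3 & 4 \\
\end{array}
\right] x = \left [
\begin{array}{c}
1 \\
1 \\
\end{array}
\right]
\quad\Longrightarrow\quad
x = \left [
\begin{array}{cc}
1 & 2 \\
3 & 4 \\
\end{array}
\right]^{-1} \left [
\begin{array}{c}
1 \\
1 \\
\end{array}
\right] = \left [
\begin{array}{c}
-1 \\
1 \\
\end{array}
\right]
[/itex]

since here ad − bc = 4 − 6 = −2 ≠ 0.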

4. How can I determine whether a 2x2 matrix has an inverse?

Compute the product of the diagonal entries (ad) and the product of the off-diagonal entries (bc) and compare them. If ad = bc, the matrix has no inverse; if ad ≠ bc, it does.
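
As a concrete check, using the matrix from post #10 above:

[itex] M = \left [
\begin{array}{cc}
1 & 2 \\
3 & 6 \\
\end{array}
\right], \qquad ad = 1 \cdot 6 = 6, \qquad bc = 2 \cdot 3 = 6 [/itex]

so ad = bc and M has no inverse.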

5. Can a matrix with no inverse still be used in calculations?

Yes. A matrix with no inverse can still be used in operations such as matrix addition, subtraction, and multiplication; it just cannot be inverted. In particular, a matrix equation Mx = y with such a matrix cannot be solved by applying an inverse: the system has either no solution or infinitely many solutions.
