Does Det(AB) = 0 Imply Det(A) or Det(B) Must Be Zero?

hkus10
1) If det(AB) = 0, is det(A) or det(B) = 0? Give reasons for your answer.

Q1) First, can't both det(A) and det(B) be 0? If they can, is this statement false? In any case, how can I prove the statement holds in general? I only know how to find an example showing it is true, which cannot cover every possibility.

2) Show that if A is singular and Ax = b, b is not equal to 0, has one solution, then it has infinitely many.

Q2) How should I approach this question?

3) Let A^2 = A. Prove that either A is singular or det(A) = 1.

Q3) How can I approach this question?
 
hkus10 said:
1) If det(AB) = 0, is det(A) or det(B) = 0? Give reasons for your answer.

Q1) First, can't both det(A) and det(B) be 0? If they can, is this statement false? In any case, how can I prove the statement holds in general? I only know how to find an example showing it is true, which cannot cover every possibility.
First, if det(A) = det(B) = 0, then the statement is most certainly true: "or" does not mean "one or the other, but not both"; it means at least one. What do you know about determinants? For example, can you use the fact that det(AB) = det(A)det(B)? If you can, this should be easy, so you probably can't. Also, tell us what you are using as your definition of the determinant, because there are various ways to define it.
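If the product rule is available, the whole argument is one line. A sketch, assuming det(AB) = det(A)det(B) and that the entries come from a field (so a product of scalars vanishes only when a factor does):

```latex
\det(AB) = \det(A)\,\det(B) = 0
\;\Longrightarrow\;
\det(A) = 0 \ \text{ or } \ \det(B) = 0 .
```

If the product rule is not yet proved in your course, you would instead argue from whatever definition of the determinant you are given, which is why the definition matters here.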

hkus10 said:
2) Show that if A is singular and Ax = b, b is not equal to 0, has one solution, then it has infinitely many.

Q2) How should I approach this question?
How does singularity relate to the determinant? Work with that and see if it leads anywhere.
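A sketch of the standard argument, assuming x_0 is the given solution of Ax = b: since A is singular, there is a nonzero vector z with Az = 0, and then

```latex
A(x_0 + t\,z) \;=\; A x_0 + t\,A z \;=\; b + t\cdot 0 \;=\; b
\qquad \text{for every scalar } t,
```

so the vectors x_0 + t z, which are all distinct because z is nonzero, give infinitely many solutions.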

hkus10 said:
3) Let A^2 = A. Prove that either A is singular or det(A) = 1.

Q3) How can I approach this question?
Again, how does singularity (what does that mean, first of all?) relate to determinants? That is, if A is singular, what can you tell about its determinant?
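Taking determinants of both sides is the natural route; a sketch, again assuming the product rule det(AB) = det(A)det(B):

```latex
A^2 = A
\;\Longrightarrow\;
\det(A)^2 = \det(A)
\;\Longrightarrow\;
\det(A)\bigl(\det(A) - 1\bigr) = 0,
```

so det(A) = 0, which means A is singular, or det(A) = 1.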
 
