Topology problem

mathwonk

Homework Helper
ok i think i see how to show that GL(n,R) has exactly two connected components using row and column operations.

i.e. i think i mentally connected each matrix with det = +1 to the identity and each matrix with det = -1 to the diagonal matrix with a single -1 in the top corner and all other diagonal entries +1.

now it should follow that every nbhd of a matrix with det=0 contains a matrix of both kinds?

mathwonk

Homework Helper
sorry hurkyl i did not see your proof of this.

"For det A = 0, you simply need to add a sufficiently small multiple of diag{1, 1, ..., 1} to get a positive determinant, and diag{-1, 1, 1, ..., 1} to get a negative determinant."

is one i thought was true the other day but why is this true? that would do the openness problem of course immediately, right?

but i am worried that the taylor series for y = det(A+xB) might have only positive values for all x near zero. i.e. the curve A+xB might not cross from one component of GL(n) to the other.

what am i missing?

mathwonk

Homework Helper
this is what bothers me: take the diagonal 3 by 3 matrix with -1 in top corner and zeroes elsewhere. then add t times the identity; the det is (t-1)t^2. this is not positive on any small nbhd of t = 0. so your first statement is false.
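This counterexample is easy to verify by direct computation. Here is a minimal Python sanity check with exact rational arithmetic (the `det3` helper is my own, not from the thread):

```python
from fractions import Fraction

def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[-1, 0, 0], [0, 0, 0], [0, 0, 0]]   # diag(-1, 0, 0), det = 0

ts = [Fraction(1, n) for n in (10, 100, 1000)]
ts += [Fraction(-1, n) for n in (10, 100, 1000)]
for t in ts:
    # M = A + t * identity
    M = [[A[i][j] + (t if i == j else 0) for j in range(3)] for i in range(3)]
    assert det3(M) == (t - 1) * t**2   # matches the closed form
    assert det3(M) <= 0                # never positive near t = 0
```

So along this particular line through A the determinant does not change sign, exactly as claimed.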

Doodle Bob

mathwonk said:
this is what bothers me: take the diagonal 3 by 3 matrix with -1 in top corner and zeroes elsewhere. then add t times the identity; the det is (t-1)t^2. this is not positive on any small nbhd of t = 0. so your first statement is false.
The reason why you're having trouble here is that the set you describe above is not open in the space of 3 by 3 matrices.

Doodle Bob

mathwonk said:
let me try again doodlebob: suppose a map f has the property that for every point A, and every nbhd N of A, the image f(N) contains a nbhd of f(A).

then f is an open map.

i.e. let U be any open set in the domain, and f(U) its image. we wish to show that f(U) is open.

so let f(A) be any point of f(U), where A is any point of U, and we will show that f(U) contains a nbhd of f(A).

but this is trivial. i.e. U is itself a nbhd of A, so f(U) contains a nbhd of f(A) by hypothesis. done.

this is what I have almost completely done for the det map. do you agree?
The above is kosher. But your specific application of it to the situation at hand is a little slippery. Looking at the image of det(tA) for small enough t works fine if the neighborhood U contains no A with det(A) = 0, and if n is odd. It does not work for n even, since that is precisely the case when the map t --> t^n is not open. In particular, the image of det(tA) will be [0,e) or (-e,0] for a small enough e > 0.

Hurkyl

Staff Emeritus
Gold Member
t --> t^n is open near t = 1, though, and that's all that matters.

Aw phooey. I forgot it could have a nonzero eigenvalue. But, I have it sealed up now.

Use a basis in which A is in Jordan canonical form.

Then, you can choose your diagonal matrix to only have nonzero entries where A's diagonal entries are zero, and that way you can get a positive or negative determinant.

mathwonk

Homework Helper
i agree, doodle bob; in fact i tried to point that problem out in post 27 above.

i am sure you all agree that i have given a 3 by 3 counter example to the following assertion:

"For det A = 0, you simply need to add a sufficiently small multiple of diag{1, 1, ..., 1} to get a positive determinant"

and I apologize for tapping your creative powers hurkyl to answer my dumb questions, not all of which are relevant to the current problem.

But it is very interesting to me to know that GL(n,R) has exactly two components, and to see that your proof is similar to my own, as that reassures me greatly as to the accuracy of my own insight.

it now seems clear however that the relevant point for this problem is your other assertion, given without proof, that the discriminant locus: {A:detA = 0} is the common boundary of both components of GL(n,R).

i.e. the assertion that det is an open map on all of matrix space is proved as follows:

if the scalars are complex it is immediate by restriction to any line through A on which det is not constant, by the openness of non constant analytic functions.

if we restrict to real A, with det not zero, i.e. to GL(n,R), it is open by my trivial homogeneity argument.

near an A with detA = 0, it is not as obvious to me by any trivial argument, but provable as follows:

the point is to show that on every nbhd of A with detA = 0, det changes signs.

first consider the "rank filtration" of the discriminant locus.

1) the locus where A has rank = n-1 is open and dense in the discriminant locus ( = locus where rank is less than n). this seems easy by arguments above of A+tB type.

2) the rank n-1 locus is the smooth locus on the hypersurface {det=0}, i.e. at every point where A has rank n-1, the locus {det=0} is a manifold of dimension n^2 - 1, with a well defined n^2 - 1 dimensional tangent space. this is "well known".

3) at a point A where A has rank n-1, if we restrict det to a line normal to the tangent plane to this manifold, then det has non zero derivative at A, hence changes sign on every nbhd of A.

i claim this does it. what do you think?

[the problem with naive A+tB arguments near a point where A has rank less than n-1, is to show the line A+tB actually passes from one "side" of the discriminant locus to the other, as indeed my 3 by 3 example shows it need not do.]
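Step 3 above can be illustrated numerically on the simplest rank n-1 example, diag(1,1,0): perturbing the single zero diagonal entry in either direction changes the sign of det in an arbitrarily small nbhd (a sketch; `det3` is my own helper, not from the thread):

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]   # rank 2 (= n-1), det = 0

for t in (1e-1, 1e-3, 1e-6):
    plus  = [row[:] for row in A]
    minus = [row[:] for row in A]
    plus[2][2]  =  t    # move off the hypersurface {det=0} in one direction
    minus[2][2] = -t    # ... and in the other
    assert det3(plus) > 0 and det3(minus) < 0
```

The perturbation direction here is exactly a line transverse to the tangent plane of {det=0} at A, along which det has nonzero derivative.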


mathwonk

Homework Helper
well as usual hurkyl, i failed to see your post before posting my own, because i did not "turn the page". anyway it is fun to see two different arguments.

oh by the way jordan canonical form does not exist over the reals, the only place where the problem is interesting.

i am extremely proud of my solution in the post above but i assume hurkyl is not reading it for the same reason i did not read his, he wants to give his own solution.


Hurkyl

Staff Emeritus
Gold Member
Aw phooey. Guess it shows I really need to actually learn something about Jordan Canonical Form. Okay, I'll try again:

Row reduce A.
Add the appropriate sufficiently small diagonal matrix to produce a positive or negative determinant.
Invert the row reduction.

Voila, a recipe for generating matrices near A with positive and negative determinants.
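The recipe can be sanity-checked on mathwonk's diag(-1,0,0) example. A Python sketch (the matrix names E, T1, T2 and the helpers are my own, not from the thread):

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

S = [[-1, 0, 0], [0, 0, 0], [0, 0, 0]]   # singular: det = 0
E = [[-1, 0, 0], [0, 1, 0], [0, 0, 1]]   # invertible row operation
Einv = E                                  # E happens to be its own inverse
ES = matmul(E, S)                         # upper triangular: diag(1, 0, 0)

eps = 1e-4
T1 = [row[:] for row in ES]
T2 = [row[:] for row in ES]
for i in range(3):
    if T1[i][i] == 0:
        T1[i][i] = eps                    # all diagonal entries now nonzero
        T2[i][i] = eps
T2[2][2] = -eps                           # flip one entry: opposite sign

M1 = matmul(Einv, T1)                     # invert the row reduction
M2 = matmul(Einv, T2)
assert det3(M1) * det3(M2) < 0            # opposite signs near S
assert all(abs(M1[i][j] - S[i][j]) <= eps for i in range(3) for j in range(3))
```

Multiplying back by E^(-1) rescales both determinants by the same nonzero factor det(E^(-1)), so the two perturbed matrices still have opposite signs, and both lie within eps of S.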

mathwonk

Homework Helper
row reduction changes the sign of the determinant.


snoble

But it does it in a predictable way. So if it switches the sign then the small matrix you added to give a negative determinant will actually give a positive determinant after you undo the row reduction. But the small matrix that you added to give a positive determinant fixes that. To formalize a little, take P=diag{1,1,1,...,1} and N=diag{1,1,1,...,-1}. Let S be your singular matrix and let A be the invertible matrix such that AS is upper triangular with only non-negatives on the diagonal (arranging the row reduction so that the zero rows, and hence a zero diagonal entry, come last). Then for t>0,
det(AS+tN)<0<det(AS+tP).
And so $$det(A^{-1})\cdot det(AS+tN)=det(S + tA^{-1}N)\ne 0, det(A^{-1})\cdot det(AS+tP)=det(S + tA^{-1}P)\ne 0$$ and they have opposite signs. But t can be arbitrarily small... small enough so that $$S + tA^{-1}N$$ is inside an epsilon ball of S. Ok so not that formal.
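The inequality det(AS+tN) < 0 < det(AS+tP) is easy to check on a concrete singular S whose reduction AS is upper triangular with the zero diagonal entry last (a sketch, not part of the thread; here A = I for simplicity):

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

P = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # snoble's P = diag{1, 1, 1}
N = [[1, 0, 0], [0, 1, 0], [0, 0, -1]]   # snoble's N = diag{1, 1, -1}

# AS: already upper triangular with non-negative diagonal (here A = I)
AS = [[0, 1, 0], [0, 0, 1], [0, 0, 0]]

for t in (1e-1, 1e-3, 1e-6):
    ASP = [[AS[i][j] + t * P[i][j] for j in range(3)] for i in range(3)]
    ASN = [[AS[i][j] + t * N[i][j] for j in range(3)] for i in range(3)]
    # both perturbations are upper triangular, so the det is the
    # product of diagonal entries: t^3 > 0 versus -t^3 < 0
    assert det3(ASN) < 0 < det3(ASP)
```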

Bah I probably should have just kept my nose out of you guys' conversation.

Cheers,
Steven

kakarukeys

is the question really that difficult?

Doodle Bob

kakarukeys said:
is the question really that difficult?
No, it is not.

Simply induct on n: the size of the matrix.

n=1 is trivial.

Then the determinant of an (n+1)x(n+1) matrix is given by $\Sigma_j (-1)^{j+1}(x_{1j} det(A_{1j}))$ where $x_{11},...,x_{1(n+1)}$ is the top row of the matrix, and $A_{1j}$ are the nxn submatrices formed by taking out the first row and jth column (the minors). To show the result then, one needs only to argue that the image of the above sum on the open set $(a_{11},b_{11})\times ... \times (a_{(n+1)(n+1)}, b_{(n+1)(n+1)})$ will be open itself. There's a little work here but it's mostly taken care of by induction and the fact that an open set in R times (literally, multiplied by) another open set in R is an open set.

Thus, a clunky but elementary proof that involves no other knowledge about matrices except how to calculate the determinant. Plus no worrying about zero determinants.
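For concreteness, the recursive first-row expansion this argument is built on can be sketched in a few lines of Python (a straightforward, not efficient, implementation):

```python
def det(m):
    """Determinant via cofactor expansion along the first row,
    the recursive definition used in the argument above."""
    n = len(m)
    if n == 1:
        return m[0][0]          # the base case det_1 = identity on R
    total = 0
    for j in range(n):
        # minor: remove the first row and the j-th column
        minor = [row[:j] + row[j+1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

assert det([[2]]) == 2
assert det([[1, 2], [3, 4]]) == -2
assert det([[1, 0, 0], [0, 1, 0], [0, 0, 1]]) == 1
```

(The sign $(-1)^{j+1}$ in the post uses 1-indexed columns; with 0-indexed `j` it becomes $(-1)^j$.)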

Hurkyl

Staff Emeritus
Gold Member
row reduction changes the sign of the determinant.
I was thinking "using just the operation of adding a row to another", I just forgot to say it this time. :tongue:

But as snoble points out, a particular reduction sequence will either always preserve the sign, or always flip the sign, so this restriction isn't vital.


mathwonk

Homework Helper
oh i see, that doesn't matter since you then undo the reduction by the same processes! I think you've done it in an elementary way! nice work as usual.

(I realized you were right as soon as i left the room and lay down, but did not have the will to get up again.)

(the one operation you mention cannot actually reduce a matrix to echelon form, but as noted your argument works anyway.)


kakarukeys

The proof by the two advisors is really beyond my head... :grumpy:

Doodle Bob said:
To show the result then, one needs only to argue that the image of the above sum on the open set $(a_{11},b_{11})\times ... \times (a_{(n+1)(n+1)}, b_{(n+1)(n+1)})$ will be open itself. There's a little work here but it's mostly taken care of by induction and the fact that an open set in R times (literally, multiplied by) another open set in R is an open set.
I don't understand, please elaborate.
I have shown earlier that the product of two open maps may not be an open map.

mathwonk

Homework Helper
the map det is open if on every nbhd of the matrix A, det takes on values both larger and smaller than det(A).

surely this argument is not over your head for the case where detA is non zero. i.e. in every nbhd of A there are matrices of form tA with t both less than 1 and greater than 1.

then det(tA) = t^n det A takes values less than det A and greater than det A.

that was my original argument.
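That scaling argument for det A ≠ 0 is easy to check numerically; a sketch with an arbitrary invertible A (my choice of A is just an example, and `det3` is my own helper):

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

A = [[2, 1, 0], [0, 1, 3], [1, 0, 1]]   # invertible; det(A) = 5 > 0
d = det3(A)
assert d == 5

scale = lambda t: [[t * x for x in row] for row in A]

# det(tA) = t^n det(A) with n = 3: scaling slightly below/above t = 1
# gives values on both sides of det(A)
assert det3(scale(0.99)) < d < det3(scale(1.01))
assert abs(det3(scale(0.99)) - 0.99**3 * d) < 1e-9
```

Both tA matrices lie in any prescribed nbhd of A once t is close enough to 1, which is all the homogeneity argument needs.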

to do the case of detA = 0, hurkyl said choose an invertible matrix E such that EA is upper triangular. this is called row reduction.

then since the det of an upper triangular matrix is merely the product of the entries on the diagonal, some diagonal entry is zero. to get a non zero determinant, change all zero diagonal entries to small positive numbers. to get a det of the opposite sign change one of those entries to a small negative number.

this shows that in every nbhd of EA there are matrices with both positive and negative determinant.

then the same holds for A because multiplying by E^(-1) just changes all the determinants by multiplying by the same number detE.

doodle bob's argument looks short because he does not actually give it.


Doodle Bob

kakarukeys said:
The proof by the two advisors is really beyond my head... :grumpy:

I don't understand, please elaborate.
I have shown earlier that the product of two open maps may not be an open map.

What I mean is the following: given two open sets U and V in R, define W to be the set W={xy: x in U and y in V}, i.e. the set of the products of elements of U and elements of V. Then W has to be open. Essentially this boils down to showing that the set of products of two open intervals is in fact an open interval, e.g. U=(-1, 1) and V=(0,5) yields W=(-5, 5).
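The example U=(-1,1), V=(0,5) can be illustrated numerically (a sketch; the bound 6*eps is just a convenient over-estimate of how fast the products approach the endpoints):

```python
# products {x*y : x in U, y in V} with U = (-1, 1), V = (0, 5)
# stay inside (-5, 5) and come arbitrarily close to both endpoints
for eps in (1e-2, 1e-4, 1e-6):
    x, y = 1 - eps, 5 - eps              # points of U and V near the endpoints
    assert 5 - 6 * eps < x * y < 5       # close to 5, from below
    assert -5 < -x * y < -5 + 6 * eps    # close to -5, from above (-x is in U)
# no product can escape (-5, 5), since |x * y| < 1 * 5 on U x V
```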

I absolutely admit that my "proof" is not complete. However, the main hook is there: we can write the determinant of an (n+1)x(n+1) matrix as the sum of n+1 terms, each a top-row variable ($x_{11}, ..., x_{1(n+1)}$) times the determinant of an nxn matrix. Using the open sets that I described, we can look at each addend and see that the image will be an open set (using the fact that nontrivial linear maps are open and that, by induction, the determinant on nxn matrices is also open).

Let me reiterate that this is the skeleton of the proof. It needs to be fleshed out some.

Hurkyl

Staff Emeritus
Gold Member
So you're claiming that multiplication is an open map from UxV --> W...

Anyways, there is a problem with your approach: notice that the function:

f(x, y) = xy - xy

is precisely of the form you describe, but it is obviously not an open map!
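Hurkyl's counterexample is easy to make concrete (a sketch): the image of any set under this map is the single point {0}, which is certainly not open in R.

```python
# f(x, y) = x*y - x*y is identically zero: each addend x*y is "open",
# but the sum collapses every open set to a single point
f = lambda x, y: x * y - x * y

image = {f(x, y) for x in range(-50, 51) for y in range(-50, 51)}
assert image == {0}   # one point: not an open subset of R
```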

Doodle Bob

Hurkyl said:
So you're claiming that multiplication is an open map from UxV --> W...

Anyways, there is a problem with your approach: notice that the function:

f(x, y) = xy - xy

is precisely of the form you describe, but it is obviously not an open map!
Actually, no, it is not. I said, nontrivial linear functions, i.e. not including the zero function, which you will never get on an open set of matrices.

Hurkyl

Staff Emeritus
Gold Member
That is a nontrivial linear function: I'm composing the function f(x, y) = x - y with the functions g(x, y) = xy and h(x, y) = xy to get f(g(x, y), h(x, y)) = 0.

Doodle Bob

Well, if a linear function is identically zero, that's pretty trivial to me. But nevertheless let me give you an updated form of my proof. I call this an elementary proof, since all it uses is the definition of the determinant and induction.

First, we start with two lemmas, the proofs of which are super-easy and I will leave to the reader.

Lemma 1: Let $O_1$ and $O_2$ be open sets in R. Then the set $O=\{x+y\in R: x\in O_1, y\in O_2\}$ is an open set of R.

Lemma 2: Let $O_1$ and $O_2$ be open sets in R. Then the set $O=\{xy\in R: x\in O_1, y\in O_2\}$ is an open set of R.

Let $det_n$ be the determinant function on $n\times n$ matrices. We will use induction to prove the following claim.

Claim: $det_n$ is an open mapping for all natural numbers n.

n=1: this is trivial, since $det_1$ is the identity map on R

Suppose the claim is true for $det_1, ..., det_{n-1}.$

Let U be the open set in $R^{n^2}$ given by $U=(a_{11},b_{11})\times \cdots \times (a_{nn},b_{nn}),$ where $a_{ij}<b_{ij}\forall\ i,\ j.$

For each i, j, set $V_{ij}=(a_{ij},b_{ij}),$ an open interval in R, and let $W_{ij}=\pi_{ij}(U)\subset R^{(n-1)^2}$, where $\pi_{ij}:\ M_{n\times n}(R) =R^{n^2} \rightarrow M_{(n-1)\times (n-1)}(R) =R^{(n-1)^2}$ is the projection map that takes out the $i^{th}$ row and $j^{th}$ column of an $n\times n$ matrix. Note that $W_{ij}$ is an open set of $R^{(n-1)^2}$, since it can be written in the form of $(c_{11},d_{11})\times \cdots \times (c_{(n-1)(n-1)},d_{(n-1)(n-1)})$, where $c_{kl}<d_{kl}\forall\ k,\ l$.

Then, for any $A=(A_{ij})\in U$, $det_n(A)=\Sigma_{j=1}^n (-1)^{j+1}A_{1j} det_{n-1}(\pi_{1j}(A))$. So, the image of the determinant on U can be written as
$det_n(U)=\Sigma_{j=1}^n (-1)^{j+1}det_1(V_{1j}) det_{n-1}(W_{1j})$.

By induction, both maps, $det_1$ and $det_{n-1}$, are open maps. So each of the terms $det_1(V_{1j})$ and $det_{n-1}(W_{1j})$, for $j=1, ..., n,$ is an open set in R. Thus, by the two lemmas, $det_n(U)$ is an open set in R.

That pretty much does it.

mathwonk

Homework Helper
that's a lot better, but you still seem to be missing the key point of hurkyl's example.

i.e. in the definition of det(U) you have to use the same matrix in every term of the formula for det.

But your application of your lemma requires you to be able to vary the matrices independently over the different terms.

so although i am beginning to believe you, i do not think you have proved your claims, especially the key one in the formula for the image of det(U).

i.e. by hurkyl's example, although the sum of two open sets is always open, the sum of two open maps is not always open, and you are apparently using that false statement in your proof.

i.e. although 0 = xy-yx, it is not true that the image of an open interval under the map 0, equals the difference of its images under xy and yx.

so you seem to be assuming that the image of an open rectangle under the map det, equals the sum of its images under the terms of your sum.

this requires proof since it can fail.

but you have not used any hypotheses that distinguish your situation from hurkyl's.

i.e. you have to use somehow that the terms in the sum do not cancel each other in any such unfortunate way.

and even this inadequate proof is longer than hurkyl's correct one.

i admit the mistake is subtle. you had me convinced for a minute there.


Doodle Bob

mathwonk said:
that's a lot better, but you still seem to be missing the key point of hurkyl's example.

i.e. in the definition of det(U) you have to use the same matrix in every term of the formula for det.

But your application of your lemma requires you to be able to vary the matrices independently over the different terms.
It's a bit more subtle than that. The proof uses the fact that there isn't really just one determinant function. There is a different determinant function defined for each size of square matrix. By using induction, we can already assume that the determinant functions defined on the smaller sized matrices (i.e. of sizes 1x1, 2x2, ..., (n-1)x(n-1)) are open mappings. In particular, if W is any open set of $R^{k^2}$ for k=1, ..., n-1, then det(W) is open by the induction hypothesis.

Using this induction hypothesis, the lemmas, and the fact that the determinant on nxn matrices is actually defined recursively as the sum of determinants of smaller matrices, we get that the determinant on nxn matrices is open as well.

Quoth mathwonk:

"so although i am beginning to believe you, i do not think you have proved your claims, especially the key one in the formula for the image of det(U).

"i.e. by hurkyl's example, although the sum of two open sets is always open, the sum of two open maps is not always open, and you are apparently using that false statement in your proof.

"i.e. although 0 = xy-yx, it is not true that the image of an open interval under the map 0, equals the difference of its images under xy and yx.

"so you seem to be assuming that the image of an open rectangle under the map det, equals the sum of its images under the terms of your sum.

"this requires proof since it can fail."

Be careful here. Nowhere do I use that the addition or multiplication of open *mappings* is open; I say that the "addition" and "multiplication" of open *sets* (as defined in the lemmas) are open sets.

This boils down to: let (a,b), (c,d) be non-empty intervals in R. Then the set $O_+$ defined by $O_+=\{x+y\in R: x\in(a,b), y\in (c,d)\}$ is the open interval (a+c, b+d), the set $O_\times$ defined by $O_\times=\{xy\in R: x\in(a,b), y\in (c,d)\}$ is actually the open interval $(min\{ac, ad, bc, bd\},max\{ac, ad, bc, bd\})$.

This needs some proving and one then has to show that it's true for a general open set. But that's easy enough.

Nevertheless, by writing det(U) as the sum of products of open sets in R (using again the induction hypothesis), we have that det(U) is open.

"but you have not used any hypotheses that distinguish your situation from hurkyl's.

"i.e. you have to use somehow that the terms in the sum do not cancel each other in any such unfortunate way.

"and even this inadequate proof is longer than hurkyl's correct one.

"i admit the mistake is subtle. you had me convinced for a minute there."

The terms may cancel each other out at individual points of the open set of U, but that doesn't matter. What matters is that the image of U is an open set of R. And the proof shows that.

Incidentally, hurkyl's example never pops up. The variables x_11, ..., x_1n show up exactly once in the definition of the determinant I'm using, since the smaller matrices do not use the first row at all. They are the submatrices formed by taking out the 1st row and the jth column. And, we don't have to worry about the determinants of the smaller matrices because of induction.

As for the length, I wasn't concerned so much about that as what machinery was involved. As is usually true in math, the more machinery, the shorter the proof. I just knew that there was a proof that did not involve Jordan canonical forms.


mathwonk

Homework Helper
Doodle Bob, the following claim of yours:

"Nevertheless, by writing det(U) as the sum of products of open sets in R (using again the induction hypothesis), we have that det(U) is open."

is exactly what you have not proved. I.e. in your 5th and 6th sentences from the end, you essentially say: "because the function det(n) is a sum of products of determinants of lower degree, it follows that the image det(n)(U) is the analogous sum of products of the images of U under those lower degree determinants."

this is equivalent to claiming that the sum of products of those open maps is itself an open map. that may be true but you have not proved it nor even argued it.

since it is not true that the sum of products of arbitrary open maps is open, you still need to prove it is true for your case.

and hurkyl's proof does not use jordan forms, only the elementary fact that for any matrix A there exists an invertible matrix E such that EA is upper triangular. this follows from row reduction, the first technique taught in matrix theory.

and my proof for the special case that det A > 0, say, uses only that if 0<t<1, then t^n det A < det A, while if t>1 then t^n det A > det A.

it doesn't get much shorter or more elementary than that.

let me make my objection to your proof as simple as possible:

just because h = f+g, it does not follow that h(U) = f(U)+g(U), which is what you are apparently claiming in your argument. this is what hurkyl's counter example shows.

i.e. h(U) consists only of all points of form f(x)+g(x) for all x in U, while f(U)+g(U) consists of the much larger collection of points of form f(x)+g(y) for all x,y in U.

I.e. f(U)+g(U) is the image of the product set UxU under the map (x,y) --> f(x)+g(y), while h(U) is the image only of the diagonal.
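The distinction between h(U) and f(U)+g(U) can be made concrete with a variant of hurkyl's example, taking f(x) = x·x and g(x) = -x·x (the names and the sampling grid are mine):

```python
# h = f + g pointwise, with f(x) = x*x and g(x) = -x*x, so h is identically 0
f = lambda x: x * x
g = lambda x: -x * x
h = lambda x: f(x) + g(x)

U = [x / 100 for x in range(-99, 100)]   # sample of the open set (-1, 1)

hU = {h(x) for x in U}                   # image of the "diagonal": one point
fU_plus_gU = {f(x) + g(y) for x in U for y in U}   # x, y vary independently

assert hU == {0.0}                       # h(U) collapses to {0}
assert len(fU_plus_gU) > 100             # f(U)+g(U) is genuinely larger
```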

If I am misunderstanding you, and I may be, please tell me how.

