So, as the title suggests, there's a point about the proof of the Tychonoff theorem I don't quite get. The theorem in Munkres is based on the "closed set and finite intersection" formulation of compactness, which, after some googling, I found out to be a less formal version of the Cartan-Bourbaki proof based on filter theory, which I don't yet know anything about. However, the proof in Munkres doesn't invoke the concept of filters explicitly, so I find it rather accessible at this stage. Before the proof itself two lemmas are presented, and the way one of them is laid out confuses me a bit. I'll quote them directly, for convenience.

Lemma 37.1. Let X be a set; let A be a collection of subsets of X having the finite intersection property. Then there exists a collection D of subsets of X such that D contains A, D has the finite intersection property, and no collection of subsets of X that properly contains D has this property.

Lemma 37.2. Let X be a set; let D be a collection of subsets of X that is maximal with respect to the finite intersection property. Then:
(a) Any finite intersection of elements of D is an element of D.
(b) If A is a subset of X that intersects every element of D, then A is an element of D.

Now, what I don't understand is this: In Lemma 37.1, Zorn's lemma is invoked in order to prove the existence of the collection D, by means of taking a specific collection A with a strict partial order (I won't go into detail), and proving that every subcollection B of A (referred to as a "subsuperset" in the proof) has an upper bound in A. From that it follows that A has a maximal element D. But I don't understand why it's stated in the lemma that D has the finite intersection property, when this fact is proved in Lemma 37.2. As far as I understand, only the existence of D is proved in the lemma. Thanks in advance for any replies.
First, let me say that I strongly dislike the proof of the Tychonoff theorem in Munkres. It is unnecessarily long. In fact, a proof by filters would only take about 2 lines (provided that you know some things about filters). And, moreover, filters are quite fun, so I'm a bit upset that Munkres left them out completely... Agreed, filters are a bit difficult when you see them for the first time, but they're not that hard... OK, after this rant, let me get to the question.

Let me explain this in detail. Let [tex](X,\leq)[/tex] be a partially ordered set and let A be a subset of X. Then we call a a maximal element of A if a belongs to A, and if there is no x in A such that [tex]a< x[/tex]. So, by definition, a maximal element belongs to the set. So, if we say that [tex]\mathcal{D}[/tex] is a maximal element among the collections with the finite intersection property, then this already implies that [tex]\mathcal{D}[/tex] has the finite intersection property. So the fact that [tex]\mathcal{D}[/tex] has the finite intersection property is already proven in Lemma 37.1.

So, Lemma 37.1 proves that any finite intersection of elements of [tex]\mathcal{D}[/tex] is nonempty. But Lemma 37.2 improves this result: it says that any finite intersection of elements of [tex]\mathcal{D}[/tex] is again in [tex]\mathcal{D}[/tex]. What Lemma 37.2 states is not the finite intersection property; it is stronger than that...
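If it helps to see the finite intersection property concretely: on a *finite* set you don't even need Zorn's lemma, you can greedily extend a collection to a maximal one. Here's a small Python sketch of my own (not from Munkres, and only an illustration of the finite case):

```python
from itertools import combinations

def subsets(X):
    """All subsets of the finite set X, as frozensets."""
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def has_fip(collection):
    """Finite intersection property: every nonempty finite subfamily
    (here: every subfamily, since everything is finite) has nonempty intersection."""
    for r in range(1, len(collection) + 1):
        for fam in combinations(collection, r):
            if not frozenset.intersection(*fam):
                return False
    return True

def maximal_fip(A, X):
    """Greedily extend A to a collection that is maximal w.r.t. the FIP.
    On a finite X this terminates and replaces the appeal to Zorn's lemma."""
    D = set(A)
    for S in subsets(X):
        if S not in D and has_fip(D | {S}):
            D.add(S)
    return D

X = {1, 2, 3}
A = {frozenset({1, 2}), frozenset({2, 3})}
D = maximal_fip(A, X)
# D turns out to be exactly the collection of all subsets of X containing 2,
# i.e. the principal ultrafilter at 2 -- maximal FIP collections are ultrafilters.
```

Note how the maximal collection automatically has the FIP (every intermediate step preserves it), which is exactly the point about maximal elements above.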
Oh yeah, if you're interested, here is a clean proof of Tychonoff's theorem using filters: www.efnet-math.org/~david/mathematics/filters.pdf Now, I know that this proof takes up 6 pages, while Munkres only requires 3. But the advantage of filters is that they have a lot of applications (e.g. in topology, lattice theory, set theory, ideals in ring theory,...), so they're worth knowing about. In topology, a great many properties can be stated in terms of filters: compactness, Hausdorff, T1, continuity, closure... So I don't understand why Munkres left this out... Sorry about the second rant.
Oh yes, I missed this point! Thanks, I'll definitely look into this, it seems interesting at first sight.
I'm going through this paper about filters you linked in post #3, and I really like it. Only one (perhaps stupid) question: Definition 2, for example, defines convergence of a filter. In what sense is this definition consistent with (i.e. does it imply?) convergence of sequences? Or doesn't it need to imply it?
Ah yes, that is a very good question. The paper I provided is a bit lacking in that respect. Sequences and filters have a lot to do with each other. For example, take a sequence [tex](x_n)_n[/tex]; then we can always associate a filter with it (called the "fundamental filter associated with the sequence"). Namely,

[tex]\mathcal{F}=\{F\subseteq X~\vert~\exists n:~\{x_m~\vert~m>n\}\subseteq F\}[/tex]

So we just take the "tails" of the sequence (these are just all the elements x_{m} of the sequence with m greater than a certain n), and then we take all sets which contain a tail. It is quite easy to show that this is a filter.

Now, the basic operations we can do with a sequence can be translated to filters:
- Subsequences: Take a sequence [tex](x_n)_n[/tex] and let [tex]\mathcal{F}[/tex] be the associated filter. Let [tex](x_{k_n})_n[/tex] be a subsequence and let [tex]\mathcal{G}[/tex] be the associated filter. Then [tex]\mathcal{F}\subseteq \mathcal{G}[/tex]. Thus taking subsequences corresponds to inclusion of filters.
- Let [tex](x_n)_n[/tex] and [tex](y_n)_n[/tex] be two sequences. Then we can form the interleaved sequence x_0, y_0, x_1, y_1, x_2, y_2,... This is sometimes a useful operation, and it corresponds to the intersection of filters.
- Saying that a sequence is in a subset A (which means that every x_{n} is in A) corresponds to saying that [tex]A\in \mathcal{F}[/tex].
- Convergence of sequences: a sequence converges if and only if the corresponding filter converges.
- Adherence: it isn't in the paper, but we can indeed define adherent points for filters. Then a point is adherent to a sequence if and only if it is adherent to the corresponding filter.

So we see that properties of sequences can be translated to properties of filters; everything you want to do with sequences can also be done with filters. Now, why are filters handy? Because there are many more filters than sequences. Another handy property is that filters determine the topology.
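To make the fundamental filter tangible, it can actually be computed when the sequence is eventually constant and takes values in a finite set. This is my own toy illustration (names are mine, not the paper's); I treat the listed prefix as continuing forever with its last value, so the set-valued tails are exactly the suffixes of the list:

```python
from itertools import combinations

def subsets(X):
    """All subsets of the finite set X, as frozensets."""
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def fundamental_filter(seq, X):
    """Fundamental filter of a sequence: all F ⊆ X containing some tail.
    `seq` lists a prefix; the sequence is assumed to repeat seq[-1] forever,
    so every tail, as a set, equals frozenset(seq[n:]) for some n < len(seq)."""
    tails = [frozenset(seq[n:]) for n in range(len(seq))]
    return {F for F in subsets(X) if any(t <= F for t in tails)}

X = {0, 1, 2}
seq = [0, 2, 2, 1, 1]            # eventually constant at 1
F = fundamental_filter(seq, X)
point_filter_at_1 = {S for S in subsets(X) if 1 in S}
# Since the sequence is eventually constant at 1, its fundamental filter
# is the point filter at 1 -- consistent with the sequence converging to 1.
```

This also illustrates the convergence bullet above: the sequence converges to 1 (in the discrete sense), and its fundamental filter contains every set containing 1, i.e. every neighbourhood of 1.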
For example, in metric spaces the following property is true: x is in the closure of A if and only if there is a sequence in A that converges to x. So we can say that knowing all sequences is actually the same as knowing the topology. Now, in general topological spaces the above property fails (at least, it fails in spaces which are not first countable). But if we work with filters, then the property does remain true, because we have: x is in the closure of A if and only if there is a filter [tex]\mathcal{F}[/tex] such that [tex]A\in \mathcal{F}[/tex] and the filter converges to x.

On an unrelated note: the fun thing with filters are the so-called ultrafilters. Easy examples of ultrafilters are the point filters (these are filters of the form [tex]\{F~\vert~x\in F\}[/tex], so take all sets that contain x). These ultrafilters can be explicitly constructed. But the question is whether there are ultrafilters which are not point filters (these are called "free ultrafilters"). The answer is that free ultrafilters can be proven to exist, but they cannot be explicitly constructed. So, if you were to ask me for an example of a free ultrafilter on [tex]\mathbb{R}[/tex], then I would be able to tell you that one exists, but I could never give you a concrete example of such an ultrafilter! The reason for this is the infamous axiom of choice, which some mathematicians still refuse to accept...
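One reason free ultrafilters can't be exhibited is that they only live on infinite sets: on a finite set, every ultrafilter really is a point filter, and that can be checked by brute force. A throwaway Python check of my own on a 3-element set:

```python
from itertools import combinations

X = frozenset({0, 1, 2})
all_subsets = [frozenset(c) for r in range(4) for c in combinations(X, r)]

def is_filter(F):
    """Filter axioms: nonempty, doesn't contain the empty set,
    closed under (pairwise, hence finite) intersection and under supersets."""
    if not F or frozenset() in F:
        return False
    for A in F:
        for B in F:
            if A & B not in F:
                return False
        for S in all_subsets:
            if A <= S and S not in F:
                return False
    return True

def is_ultrafilter(F):
    """Ultrafilter: for every A ⊆ X, either A or its complement belongs to F."""
    return is_filter(F) and all(A in F or (X - A) in F for A in all_subsets)

# Enumerate every collection of subsets of X (2^8 = 256 of them) and filter.
collections_ = [frozenset(c) for r in range(len(all_subsets) + 1)
                for c in combinations(all_subsets, r)]
ultrafilters = [F for F in collections_ if is_ultrafilter(F)]
point_filters = [frozenset(S for S in all_subsets if x in S) for x in X]
# Exactly three ultrafilters show up: the point filters at 0, 1 and 2.
```

So on {0, 1, 2} the search finds no free ultrafilters, as expected; the interesting (choice-dependent) ones only appear on infinite sets like [tex]\mathbb{N}[/tex] or [tex]\mathbb{R}[/tex].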
micromass, thanks for this detailed reply, it makes perfect sense. About the ultrafilter: yes, I saw this in the paper. After the proof of the existence of an ultrafilter, it is noted that if F is a filter containing the cofinite filter, then F cannot be generated by a singleton. By the way, is the axiom of choice "infamous" among some mathematicians because it can lead to such proofs ("non-constructive proofs"), i.e. proofs showing the existence of something but giving no explicit way of actually constructing it?
Yes, the axiom of choice can lead to very, very strange situations. There was a lot of controversy about the axiom at the start of the 20th century. You will use the axiom of choice a lot in mathematics; in some situations this is beneficial, and in other situations strange things can happen. One of these strange things is the very famous Banach-Tarski paradox. It roughly states that you can take a ball, split the ball up into 5 pieces, reassemble those pieces, and obtain 2 balls. And it appears that the axiom of choice is fundamental in its proof. This is why some mathematicians refuse to accept the axiom, and the ones that do accept it have to reconcile themselves with these strange things. There are a lot of other counterintuitive results; a full appreciation of them would require some set theory. Naturally, the axiom of choice also has a lot of beneficial consequences. One example is the Tychonoff theorem. Another is that all vector spaces have a basis. It is because of these results that most mathematicians accept the axiom of choice...
Just a question about the proof of Theorem 7, Section 4 of the paper, direction "==>". It says: suppose x is a limit point of A; then [tex]\{U\cap A~\vert~U\in N_x\}[/tex] is a filter on A converging to x. If this filter converges to x, then it must contain all the neighborhoods of x, but x may not be in A, right? So I don't see how it contains all the neighborhoods of x.
Well, the author of the paper uses a new definition of convergence, the one given before Proposition 4. Basically, we can extend a filter on A to a filter on X, and we say that the filter on A converges to x if the extended filter converges.

So, in Theorem 7, we have the filter [tex]\mathcal{F}=\{U\cap A~\vert~U\in N_x\}[/tex]. This is a filter on A. The extended filter on X is [tex]\{B~\vert~\exists U\in N_x: U\cap A\subseteq B\}[/tex]. So we take all the sets in [tex]\mathcal{F}[/tex] and we adjoin all larger sets. Now, the claim in Theorem 7 is that this extended filter converges to x. So, take a neighbourhood U of x. Then [tex]U\cap A\subseteq U[/tex], so by definition the neighbourhood U lies in the extended filter. Thus the extended filter converges to x, and by definition the filter [tex]\mathcal{F}[/tex] now converges to x...
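The extension step can also be spelled out computationally. A small sketch of my own (finite X, with the filter on A generated by a single set standing in for the trace filter [tex]\{U\cap A~\vert~U\in N_x\}[/tex]):

```python
from itertools import combinations

def subsets(X):
    """All subsets of the finite set X, as frozensets."""
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def extend_filter(F_on_A, X):
    """Extend a filter on A ⊆ X to a filter on X: take every B ⊆ X
    that contains some member of the filter on A (adjoin all larger sets)."""
    return {B for B in subsets(X) if any(G <= B for G in F_on_A)}

X = {0, 1, 2, 3}
A = {0, 1}
# A filter on A: the principal filter on A generated by {0}.
F_on_A = {frozenset({0}), frozenset({0, 1})}
F_on_X = extend_filter(F_on_A, X)
# The extension is all subsets of X containing 0; in particular every
# member of F_on_A is still in it, plus all their supersets in X.
```

So a set like {0, 2, 3} lands in the extended filter even though it is not a subset of A, which is exactly why the extended filter can contain full neighbourhoods of x while the original filter on A cannot.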
Ah, of course, I was sloppy again! I forgot about the definition of convergence of a filter in a subspace! Thanks! This was a very instructive paper. Actually, after this I definitely prefer this approach to the one in Munkres.
Yes, I think filters are quite fun! So I don't understand why Munkres left them out. Maybe he thinks that filters are too hard for an introductory course on topology. I can understand that, but still...
Yes, probably not too hard, but perhaps too much, since I guess there are other topics which should be included too, and that would make an even bigger book.
Yeah, you're right. It would be insane to include more than they already did. In the book "Encyclopedia of General Topology", the authors have attempted to include everything of general topology. But the definitions alone (no theorems, proofs, examples, etc.) take up over 500 pages!! And that doesn't even include algebraic topology...