Baby Rudin Theorem 2.36 explanation

In summary, the theorem states that if we have a collection of compact subsets of a metric space X such that the intersection of every finite subcollection is nonempty, then the intersection of the entire collection is nonempty. The proof is very simple and I can easily follow the abstract reasoning. However, I think there is a deeper intuition behind this result which I cannot quite seem to figure out.
  • #1
jecharla
Theorem 2.36 says that if we have a collection of compact subsets of a metric space X such that the intersection of every finite subcollection is nonempty, then the intersection of the entire collection is nonempty. The proof is very simple and I can easily follow the abstract reasoning. However, I think there is a deeper intuition behind this result which I cannot quite seem to figure out.

Could anyone provide a more concrete explanation of why compactness, specifically, is what yields this property?
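A concrete illustration (added here, not from the thread) of why compactness is the crucial hypothesis: compare nested compact intervals with nested open ones.

```latex
% Nested compact sets K_n = [0, 1/n]: every finite subcollection intersects,
% and, as Theorem 2.36 guarantees, so does the whole family:
\[ \bigcap_{n=1}^{\infty} \Bigl[\,0, \tfrac{1}{n}\,\Bigr] = \{0\} \neq \emptyset . \]
% Drop compactness but keep the finite intersection hypothesis, e.g. E_n = (0, 1/n):
% every finite intersection equals (0, 1/m) for the largest index m, hence is nonempty,
% yet the full intersection is empty, so the conclusion fails without compactness:
\[ \bigcap_{n=1}^{\infty} \Bigl(0, \tfrac{1}{n}\Bigr) = \emptyset . \]
```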
 
  • #2
That is not a theorem, that is an equivalent restatement of the definition. Rudin has a wonderful way of making trivial things seem hard. I really dislike that book.

I.e. the definition of X compact says: if ∪ Xi = X with each Xi open, then for some finite subcollection Xj of the Xi, ∪ Xj = X.

But two sets are equal iff their complements are equal, so this says:

if complement(∪ Xi) = complement(X) = ø, then for some finite subcollection Xj of the Xi, complement(∪ Xj) = ø.

But the complement of a union is the intersection of the complements, thus:

if ∩ complement(Xi) = ø, then for some finite subcollection Xj of the Xi, ∩ complement(Xj) = ø.

Now the contrapositive of this is:

if for every finite subcollection Xj of the Xi we have ∩ complement(Xj) ≠ ø,

then for the full collection Xi, we have ∩ complement(Xi) ≠ ø.

But a set is open iff its complement is closed, so this becomes:

if Zi is a collection of closed sets such that for every finite subcollection Zj of the Zi we have ∩ Zj ≠ ø,

then for the full collection Zi, we have ∩ Zi ≠ ø.

So this is just a really tedious and silly way of rendering a trivial statement as hard to grasp as possible. Can't you find a book you like better than that one? Or is some analysis professor torturing you by assigning that book to read?
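For reference, here is the same chain of equivalences written out compactly; the notation (open sets U_i, closed complements Z_i = X \ U_i) is added here and is not mathwonk's.

```latex
% Open-cover definition of compactness of X:
\[ \bigcup_i U_i = X \;\Longrightarrow\; \exists\, i_1,\dots,i_n:\ U_{i_1}\cup\cdots\cup U_{i_n} = X . \]
% Take complements and apply De Morgan, with Z_i = X \setminus U_i closed:
\[ \bigcap_i Z_i = \emptyset \;\Longrightarrow\; \exists\, i_1,\dots,i_n:\ Z_{i_1}\cap\cdots\cap Z_{i_n} = \emptyset . \]
% Contrapositive: the closed-set / finite intersection property form:
\[ \Bigl(\forall\, i_1,\dots,i_n:\ Z_{i_1}\cap\cdots\cap Z_{i_n} \neq \emptyset\Bigr)
   \;\Longrightarrow\; \bigcap_i Z_i \neq \emptyset . \]
```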
 
  • #3
mathwonk said:
That is not a theorem, that is an equivalent restatement of the definition. Rudin has a wonderful way of making trivial things seem hard. I really dislike that book. [...] So this is just a really tedious and silly way of rendering a trivial statement as hard to grasp as possible. Can't you find a book you like better than that one? Or is some analysis professor torturing you by assigning that book to read?



The above characterization of compact sets is not really that hard or messy once one has developed some abstract topological intuition, and it is essential for the proof I know of the Tychonoff theorem, one of the jewels of topology.

DonAntonio
 
  • #4
mathwonk said:
So this is just a really tedious and silly way of rendering a trivial statement as hard to grasp as possible. Can't you find a book you like better than that one? Or is some analysis professor torturing you by assigning that book to read?

++

I agree with this. Although I did like Rudin at one point, I think it was just the thrill of the first chapter being kind of neat.
I've never really found a decent book for that level of analysis, so I'm lacking a certain amount of groundwork there when I do stuff at a slightly higher level.

Would you be kind enough to recommend a better Rudin replacement?
Pretty please :blushing:
 
  • #5
We have often recommended analysis books other than Rudin: anything by Sterling K. Berberian, for instance. Foundations of Modern Analysis is also hard to read, but much more rewarding than Rudin, and people younger than I am are probably better at suggesting more recent books. Search the thread on books, or the "who wants to be a math guy" thread.

In fact it looks to me as if the first 8 chapters of Rudin are just at the level of a good single variable calculus book, so I would suggest the two books of Spivak, or the two volumes of Apostol, or Lang's two books (or even just the first volume, Analysis I). I also like Fleming's Calculus of Several Variables.

But just because I dislike Rudin, you may still get some benefit from it, especially if you enjoy the slick way he does some things.

And DonAntonio is right that Theorem 2.36 is really very easy in a sense. I.e. if you understand complementation and contrapositives well, those two statements are, as I said, exactly equivalent. I just don't care for someone pretending, as Rudin does, that this is a significant result when it is only a trivial restatement, with many double negatives, of the same thing.

I just think it should be presented in a way that makes clear that it is trivial, not called a theorem. And Kelley gives two proofs of Tychonoff's theorem, one using each version.
 
  • #6
Thanks to DonAntonio's remark, I may have an idea to explain why Rudin includes this result, though I cannot explain why he gives no intuition for it; that is his trademark. In my whole career I have only used the open set version, but that is because I am using compactness instead of proving it. Once you prove something like Tychonoff's theorem, say for me 45 years ago, you never prove it again, you just apply it.

So in my experience one essentially always prefers the open set version of compactness when applying the property. But when proving compactness, one may often want to use proof by contradiction, and that's where the closed set version can come in.

I.e. if we want to prove compactness by contradiction, as most proofs of Tychonoff seem to do, then we start by assuming we have a family of open sets such that no finite subcollection covers. I.e. for every finite subcollection there is a point outside the union of those finitely many sets, or equivalently a point lying in the complement of each of them, i.e. in the intersection of their (closed) complements.

So the negation of the conclusion of compactness says that we have a collection of closed sets, namely the complements of the original family, such that every finite subcollection of these closed sets has a point in common. Then if we can prove that all of these closed sets have a point in common, we have proved the contrapositive of compactness, which is equivalent to compactness.

So Rudin's 2.36 is just the statement of the contrapositive of the definition, but made a little more complicated by also considering the complements of the sets as well as the contrapositive of the logical statement.
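A minimal symbolic sketch of the contradiction structure described above, in notation added here (open sets U_i with no finite subcover, closed complements Z_i = X \ U_i):

```latex
% "No finite subcollection of the open sets U_i covers X" translates, via De Morgan,
% into the finite intersection property for the closed complements Z_i = X \setminus U_i:
\[ \forall\, i_1,\dots,i_n:\quad Z_{i_1}\cap\cdots\cap Z_{i_n} \neq \emptyset . \]
% If one can then produce a common point p of all the Z_i, that point lies in no U_i,
% so the U_i do not cover X:
\[ p \in \bigcap_i Z_i \;\Longrightarrow\; p \notin \bigcup_i U_i \;\Longrightarrow\; \bigcup_i U_i \neq X . \]
% This proves the contrapositive of "every open cover has a finite subcover".
```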

But of course he doesn't say so. Notice that although pages 36-37 of Rudin do have some explanation and intuition, pages 38-40 have essentially none, just a list of theorems and proofs.

His fans consider this "elegant", but I consider it lazy and unhelpful writing. At least it is unhelpful to beginners; if you already know the material, as the professional analysts who prefer this book do, then it may be useful to see everything laid out efficiently and briefly. But I think his book is aimed at learners, few of whom seem to benefit from it.
 

What is theorem 2.36 in Baby Rudin and why is it important?

Theorem 2.36 in Baby Rudin states that if a collection of compact subsets of a metric space has the property that every finite subcollection has nonempty intersection, then the intersection of the entire collection is nonempty. This is important in analysis because it is essentially the closed-set (finite intersection property) reformulation of compactness discussed in the thread above, and it plays a crucial role in many other theorems and proofs, for example in proofs of Tychonoff's theorem.
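In symbols, writing K_alpha for the members of the collection (notation added here):

```latex
% Theorem 2.36 (finite intersection property): for a collection {K_alpha} of compact
% subsets of a metric space X,
\[ \Bigl(\forall\, \alpha_1,\dots,\alpha_n:\ K_{\alpha_1}\cap\cdots\cap K_{\alpha_n} \neq \emptyset\Bigr)
   \;\Longrightarrow\; \bigcap_\alpha K_\alpha \neq \emptyset . \]
```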

How is theorem 2.36 proven in Baby Rudin?

The proof of Theorem 2.36 in Baby Rudin uses compactness directly, via an open-cover argument applied to one member of the collection: assuming no point of that set lies in every member of the collection, the complements of the other members form an open cover of it, and extracting a finite subcover produces a finite subcollection with empty intersection, contradicting the hypothesis.
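A sketch of that argument, reconstructed here from memory (treat it as a paraphrase rather than a quotation of Rudin):

```latex
% Fix one member K_1 and suppose, for contradiction, that no point of K_1 belongs
% to every K_alpha. Then the open complements G_alpha = X \setminus K_alpha cover K_1:
\[ K_1 \subset \bigcup_\alpha G_\alpha , \qquad G_\alpha = X \setminus K_\alpha \ \text{open}. \]
% Compactness of K_1 yields a finite subcover G_{\alpha_1},\dots,G_{\alpha_n}, which means
\[ K_1 \cap K_{\alpha_1} \cap \cdots \cap K_{\alpha_n} = \emptyset , \]
% contradicting the hypothesis that every finite subcollection has nonempty intersection.
```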

Can theorem 2.36 be applied to all metric spaces?

Yes, Theorem 2.36 applies in every metric space, because the proof uses only the compactness of the sets in the collection, not any special property of the ambient space. What cannot be dropped is compactness itself: for sets that are merely closed, the conclusion can fail.
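A standard counterexample (not from the thread) showing that closedness alone is not enough:

```latex
% The sets F_n = [n, \infty) are closed in \mathbb{R} but not compact.
% Every finite subcollection intersects:
\[ F_{n_1}\cap\cdots\cap F_{n_k} = [\,\max(n_1,\dots,n_k),\,\infty) \neq \emptyset , \]
% yet the whole family has empty intersection, so the conclusion of Theorem 2.36 fails:
\[ \bigcap_{n=1}^{\infty} [\,n, \infty) = \emptyset . \]
```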

Are there any real-world applications of theorem 2.36?

Yes, compactness arguments of this kind have many applications, particularly in physics and engineering. For example, they are used to prove the convergence of numerical methods and to establish the existence of solutions of differential equations, where one typically extracts a limit object from a family of approximations by means of a compactness property.

Can theorem 2.36 be extended to higher dimensions?

Yes. The statement is not tied to any particular dimension: it applies to compact subsets of Euclidean space of any dimension, and more generally to compact subsets of any Hausdorff topological space, with essentially the same proof.
