
A function on a compact set is compact

  1. Mar 28, 2005 #1
    If f : U → R is continuous on U, and E ⊂ U is closed and bounded, then
    f attains an absolute minimum and maximum on E.

    How do you prove this theorem? I asked about this a while ago, but now I have a chance to redo the assignment and I need to fix my proof. I started by proving it for the 1 dimensional case. Here it goes:

    for [tex]f:U\subset\mathbb{R}\rightarrow\mathbb{R}[/tex]

    It is easy to show by continuity that every point has an epsilon neighborhood on which the function is bounded (I will only show this if someone asks). So if we consider f on the interval (a,b), consider the set of x in (a,b) such that f is bounded on (a,x). From what was just said, this set is non-empty. Let c be its least upper bound. Assume c&lt;b; then f is bounded on (a,c-e), but since it is also bounded on (c-e,c+e), it is bounded on (a,c+e). This contradicts our choice of c, therefore c=b. Thus f is bounded on (a,b-e) but also on (b-e,b), and therefore on (a,b).

    To show that f achieves its bounds, consider [tex]g(x)=\frac{1}{M-f(x)}[/tex]

    where M is either the least upper bound or the greatest lower bound of f. If f(x) never equals M on (a,b), then g(x) is continuous on (a,b) and thus, by the argument above, is bounded on (a,b). But since M is the least upper (or greatest lower) bound, f gets arbitrarily close to M, so g is unbounded; this contradiction shows f(x)=M for some x in (a,b).
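    As a numerical sanity check (not part of the proof), one can sample a continuous function on a fine grid of a closed bounded interval and see the maximum attained at an actual point of the interval; the function f and the interval below are arbitrary choices of mine, purely for illustration:

```python
import math

# Arbitrary continuous function and closed bounded interval -- my own
# choices for illustration; nothing here is part of the original argument.
def f(x):
    return x * math.sin(3.0 * x)

a, b, n = 0.0, 2.0, 100_000
grid = [a + (b - a) * i / n for i in range(n + 1)]
values = [f(x) for x in grid]

# The sampled maximum is attained at a genuine grid point of [a, b],
# as the theorem predicts for the true maximum on a closed interval.
i_max = max(range(len(values)), key=values.__getitem__)
x_max = grid[i_max]
```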

    My problem is in trying to extend this to the n-dimensional case. First of all, can it be extended? If it can, my problem is how to use the trick of arguing by contradiction that the function is bounded from one boundary point to another, because the boundary is no longer just 2 points. What I had handed in was: since the function is bounded on an epsilon neighborhood of every point, take the union of such neighborhoods to be a covering of E by open sets, and call it K. Then f is bounded on K, since it is a union of sets on which f is bounded. Since E is contained in K, f is therefore bounded on E. Then the same argument used before can be used to show that f attains its bounds.

    The problem with this proof is that I am wrong in saying that f is bounded on K because it is a union of sets on which f is bounded: that is not necessarily true for an infinite union.

    Is there any way to show that a covering of E on which F is bounded can be made by a union of a finite number of sets on which F is bounded? Or is there no way to salvage this proof?

    My difficulty here is that we have been taught little to no topology (most of the terminology used in this post I learnt on my own on the internet), so I need a proof that doesn't depend on too many other theorems. We've basically been given the definitions of open, closed, bounded, compact, continuous and limit, and that's about it.

    Thanks for any and all help.
  3. Mar 28, 2005 #2
    If you know the following:

    "A subset of R is compact if and only if it is closed and bounded"

    "Images of compact sets under a continuous function are compact"

    Then you can use this to say:

    E is compact. Therefore, f(E) is compact. Therefore, f(E) is closed and bounded. f(E) is bounded -> f(E) has a least upper bound and greatest lower bound in R. But f(E) is closed -> it contains all its limit points. Therefore, the lub and glb are IN f(E). Therefore, f attains its min and max on E.
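    The same chain written out symbolically (just a restatement of the steps above, with the lub/glb written as sup/inf):

```latex
E \text{ closed and bounded}
  \;\Rightarrow\; E \text{ compact}
  \;\Rightarrow\; f(E) \text{ compact}
  \;\Rightarrow\; f(E) \text{ closed and bounded}.

f(E) \text{ bounded} \;\Rightarrow\; M = \sup f(E),\ m = \inf f(E) \text{ exist in } \mathbb{R}.

f(E) \text{ closed} \;\Rightarrow\; M, m \in f(E),
\text{ so } f(x) = M \text{ and } f(y) = m \text{ for some } x, y \in E.
```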

    Hope that was clear, I was trying to get it out fast.

    edit: If you want evidence for any of the statements made, please just say so and I'll try when I get the chance
    Last edited: Mar 28, 2005
  4. Mar 29, 2005 #3

    matt grime

    Science Advisor
    Homework Helper

    I guess from reading the post that the OP doesn't know that the image of a continuous function on a compact space is compact, which follows from the fact that f is continuous if and only if the inverse image of an open set is open.

    Let f:U --> V be continuous and U be compact.

    Let K_a be an open cover of f(U), and consider the preimages f^{-1}(K_a). These form an open cover of U, hence by definition it can be reduced to a finite subcover,

    f^{-1}(K_1), ..., f^{-1}(K_n)

    but then K_1, ..., K_n is an open cover of f(U), so f(U) is compact.

    If we take U and V to be subsets of R^n then compact is equivalent to closed and bounded (Heine-Borel), and thus joeboo's proof takes over.
  5. Mar 29, 2005 #4
    Yes, the original poster doesn't know that the image of a continuous function on a compact set is compact. Just a few little questions. First of all, why the subscript a on the K_a? Secondly, why does f(U) having a finite open cover imply that it is compact? Is this the actual definition of compactness? The only definition we were given of compact is that the set is closed and bounded. In case this is all we're allowed to work with, how do you show that the two are equivalent?

    My prof is really haphazardly covering the material, and then expecting quite a bit of us. It's a vector calculus class, so this stuff is more of a sidenote, and is not in any of our books. I don't have time to learn the stuff properly right now so that will have to wait until the summer. Anyone have any suggestions for good books/webpages to get started on this stuff? Thanks again.
  6. Mar 29, 2005 #5
    The traditional definition of compactness is a topological one. If you aren't familiar with topology, but have a strong interest in furthering your understanding of mathematics, I'd strongly suggest picking up a book on it. A book that I would recommend would be https://www.amazon.com/exec/obidos/...f=sr_1_1/103-0648298-1931040?v=glance&s=books
    The topological definition of compact can seem somewhat daunting at first, but a closer understanding of it reveals the true strength that being compact implies. To simplify the definition (which matt and I were using in our posts), you can think of it like this:
    A set K in a space X (in your case a set U in R) is compact if no matter how you try to cover it with a collection of open sets, you can always do it with some finite subcollection.
    However, from what you say, you're working with a very primitive definition of compactness, namely, that a set K in R is compact iff it is closed and bounded. To further assist you, it would be helpful to know what exact definition of "closed set" you are using.
    Let us know that, and I'm sure we can help you.

    If you want to know more about Topology, I'd love to help you. You can contact me via the email or ICQ# provided in my profile on this forum. Or, you can just keep posting here.

    Last edited by a moderator: May 2, 2017
  7. Mar 29, 2005 #6
    The definition of closed set we are using is that the set contains all of its limit points (if a sequence {x_i}, i from 1 to infinity, is in U and converges, then the limit as i goes to infinity of the sequence is in U).
    I am definitely very interested in learning more, but probably can't get into it until after finals, at the end of April. I will check the library for the book you recommended. Hopefully I will be taking a class on point set topology second term of next year, depending on whether or not it is offered, but I would like to get a head start.
  8. Mar 30, 2005 #7

    matt grime


    If you've only been doing it in terms of sequences, here's the method of the proof you need.

    Suppose that x_n is a sequence in f(U). If f(U) is unbounded then we can pick x_n such that |x_n|>n.

    Since x_n is in the image of f, let y_n be a preimage in U of x_n. This is a sequence in a compact set, i.e. closed and bounded. By Bolzano-Weierstrass it has a convergent subsequence; say z_n is that subsequence, converging to z, and z is in U by the closedness assumption on U. But f is continuous, so f(z_n) is a convergent subsequence of x_n, which contradicts the way we constructed x_n. So f(U) must be bounded. The same idea shows f(U) is closed: if x_n is any convergent sequence in f(U), pass to a subsequence z_n of the preimages converging to some z in U; then f(z_n) converges to f(z), so the limit of x_n is f(z), which is in f(U). Hence all sequences in f(U) have their limit points in f(U).
  9. Mar 30, 2005 #8
    I talked to my teacher today, and he said we're not allowed to assume the Bolzano-Weierstrass theorem; we must prove it also. I can see how to prove this in R: simply divide your interval in half, then at least one half has infinitely many terms; divide it again, and so on, so that you have a nested sequence of intervals shrinking to a point, each containing infinitely many terms. I can see how this would also work for R^n, but since you are no longer working with nice intervals, my problem is only in notation. How would you mathematically express this process in R^n?
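    For what it's worth, the halving process in R can even be run numerically. Here is a rough sketch of my own (the sequence sin(n) is an arbitrary example, and "infinitely many terms" is approximated by counting the first hundred thousand, so this is an illustration of the proof, not the proof itself):

```python
import math

def limit_point_sketch(seq, lo, hi, steps=30, n_terms=100_000):
    # Bisection sketch of Bolzano-Weierstrass: repeatedly halve [lo, hi],
    # keeping a half that contains "infinitely many" terms -- approximated
    # here by counting among the first n_terms of the sequence.
    terms = [seq(n) for n in range(n_terms)]
    for _ in range(steps):
        mid = (lo + hi) / 2.0
        left = sum(1 for x in terms if lo <= x <= mid)
        right = sum(1 for x in terms if mid < x <= hi)
        if left >= right:   # keep the fuller half; in the real proof, the
            hi = mid        # half containing infinitely many terms
        else:
            lo = mid
    return (lo + hi) / 2.0  # approximate limit point of the sequence

# x_n = sin(n) is a bounded sequence in [-1, 1] (my example)
p = limit_point_sketch(lambda n: math.sin(n), -1.0, 1.0)
```

    After 30 halvings the interval has width about 2e-9 and still contains terms of the sequence, so its midpoint approximates a limit point.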
  10. Mar 30, 2005 #9

    matt grime


    Simple: look at the first component of the vectors; it is a sequence lying in a closed bounded subset of R, so it has a convergent subsequence. Passing to this subsequence, now look at the second component. Repeat as necessary.
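    A numerical sketch of that component-by-component process (again my own illustration, with "infinitely many" approximated by a large finite sample; the sequence in R^2 is an arbitrary choice):

```python
import math

def componentwise_cluster(vectors, steps=8):
    # For each coordinate in turn, bisect a bounding interval, keep the
    # half containing more of the surviving terms, and discard the rest.
    # The survivors form a subsequence whose every coordinate lies in a
    # tiny interval -- a finite stand-in for passing to a convergent
    # subsequence coordinate by coordinate.
    dim = len(vectors[0])
    indices = list(range(len(vectors)))  # surviving subsequence indices
    for d in range(dim):
        lo = min(vectors[i][d] for i in indices)
        hi = max(vectors[i][d] for i in indices)
        for _ in range(steps):
            mid = (lo + hi) / 2.0
            left = [i for i in indices if vectors[i][d] <= mid]
            right = [i for i in indices if vectors[i][d] > mid]
            # at least one half retains at least half the survivors
            if len(left) >= len(right):
                indices, hi = left, mid
            else:
                indices, lo = right, mid
    return indices

# a bounded sequence in R^2 (my example)
vecs = [(math.sin(n), math.cos(n)) for n in range(100_000)]
surviving = componentwise_cluster(vecs)
```

    Each pass shrinks the spread of one coordinate by a factor of 2^steps while keeping an (in the real proof, infinite) subsequence alive, which is exactly the repeated passing-to-subsequences described above.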