Distance between Compact Subspaces: Proving Existence of Minimum Distance

  • Thread starter: fireisland27
  • Tags: Subspaces

Homework Statement



For a metric space ##(M,d)## and two compact subspaces ##A## and ##B##, define the distance between these sets as ##d(A,B) = \inf\{d(x,y) : x \in A,\ y \in B\}##. Prove that there exist ##x \in A## and ##y \in B## such that ##d(x,y) = d(A,B)##.

Homework Equations




The Attempt at a Solution



I want to say that ##d## can be thought of as a function from ##A \times B## to ##\mathbb{R}##, and because ##A## and ##B## are compact, ##A \times B## is compact. So if I can show that ##d## is continuous, then it achieves its minimum by the extreme value theorem. I'm not sure how to show the continuity, however, or if this is even the proper way to go about this.
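To spell out the plan in symbols (assuming ##A \times B## carries the usual product metric/topology, e.g. the max metric): define
$$f : A \times B \to \mathbb{R}, \qquad f(x,y) = d(x,y).$$
A finite product of compact spaces is compact, so ##A \times B## is compact, and if ##f## is continuous, then by the extreme value theorem there is some ##(x^*, y^*) \in A \times B## with
$$f(x^*, y^*) = \inf_{(x,y) \in A \times B} f(x,y) = d(A,B).$$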
 
Well, you want to show that if ##x_n \rightarrow x## and ##y_n \rightarrow y##, then ##d(x_n, y_n) \rightarrow d(x,y)##, right?

Do you know already that ##d(x,y)## is continuous in each variable separately?
 
I believe that if ##y## is fixed, then ##d_y(x) = d(x,y)## is continuous by a very simple epsilon-delta proof. Is this what you mean? If so, ##d(x_n, y)## converges to ##d(x,y)##. Does this automatically give us that ##d(x_n, y_n)## converges to ##d(x,y)##?
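(For reference, the estimate behind that epsilon-delta proof is just the triangle inequality applied both ways:
$$|d(x,y) - d(x',y)| \le d(x,x'),$$
since ##d(x,y) \le d(x,x') + d(x',y)## and ##d(x',y) \le d(x',x) + d(x,y)##. So ##\delta = \varepsilon## works.)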
 
Start with ##|d(x_n,y_n) - d(x,y)|##, add and subtract something clever, and use the triangle inequality.
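(One way the hint plays out, adding and subtracting ##d(x,y_n)##:
$$|d(x_n,y_n) - d(x,y)| \le |d(x_n,y_n) - d(x,y_n)| + |d(x,y_n) - d(x,y)| \le d(x_n,x) + d(y_n,y) \to 0,$$
using the one-variable estimate from the previous post twice.)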
 
Ok, I think it all comes together now. The only part I want to be sure of is: given a sequence ##\{(x_n, y_n)\}## in ##A \times B## which converges to ##(x,y)##, is it as obvious as it seems that ##x_n## converges to ##x## in ##A## and ##y_n## converges to ##y## in ##B##? Does this require a proof?
If not, then I think I can now show that for a sequence ##\{(x_n, y_n)\}## converging to ##(x,y)##, ##\{d(x_n, y_n)\}## converges to ##d(x,y)##, making ##d## continuous and thus attaining its minimum on ##A \times B##, which would be the distance between ##A## and ##B##. Does this all seem correct?
 
I think you should prove it. The general approach looks correct to me (but I didn't think very hard about it).
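A minimal sketch of that proof, assuming ##A \times B## carries the max product metric ##D((x,y),(x',y')) = \max\{d(x,x'),\, d(y,y')\}## (the sum metric works the same way): from
$$d(x_n,x) \le D((x_n,y_n),(x,y)) \quad \text{and} \quad d(y_n,y) \le D((x_n,y_n),(x,y)),$$
convergence ##(x_n,y_n) \to (x,y)## forces ##x_n \to x## and ##y_n \to y##; conversely, ##D((x_n,y_n),(x,y)) \le d(x_n,x) + d(y_n,y) \to 0##.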
 