
Toeplitz-Hausdorff Theorem

  1. Jan 7, 2016 #1
    1. The problem statement, all variables and given/known data
    Here is a link to the paper I am working through: http://www.ams.org/journals/proc/1970-025-01/S0002-9939-1970-0262849-9/S0002-9939-1970-0262849-9.pdf

    2. Relevant equations


    3. The attempt at a solution

I am working on the first line of the proof. This is what I understand thus far: first, they are relying on the fact that ##W(A)## is convex if and only if ##W(\mu A + \gamma I)## is. Here is where I am unsure of things. I believe the first sentence is saying that we can stretch (or contract) the set by an amount ##\mu## and translate it by an amount ##\gamma## so that there exist vectors ##x_0## and ##x_1## such that ##\langle (\mu A + \gamma I)x_0,x_0 \rangle = 0## and ##\langle (\mu A + \gamma I)x_1,x_1 \rangle = 1##. If this is true, then the problem reduces to assuming that we have an operator ##A## such that ##\langle Ax_0,x_0 \rangle = 0## and ##\langle Ax_1,x_1 \rangle = 1##.

Is that a correct interpretation? The reason I ask is that I am interested in justifying this step, and I want to know precisely what I am proving.
     
    Last edited: Jan 7, 2016
  3. Jan 7, 2016 #2

    fresh_42

    Staff: Mentor

No. It relies on the fact that ##A## is linear and ##\|x\| = 1##.
Set ##\langle Ax'_0,x'_0 \rangle = c_0 \cdot \langle x'_0,x'_0 \rangle = c_0## and ##\langle Ax'_1,x'_1 \rangle = c_1 \cdot \langle x'_1,x'_1 \rangle = c_1##; then you can define ##\mu = (c_1 - c_0)^{-1}## and ##\gamma = c_0 (c_0 - c_1)^{-1}## to get the points ##x_0## and ##x_1##.
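A quick numerical sanity check of these formulas (not part of the proof; the random matrix ##A## and unit vectors are placeholders, and the quadratic form ##\langle Ax,x \rangle## is computed as ##x^* A x##):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# arbitrary complex matrix (a bounded operator on C^n)
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

def q(B, x):
    """Quadratic form <Bx, x> = x* B x."""
    return np.vdot(x, B @ x)

# two arbitrary unit vectors (their field values c0, c1 should differ)
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x0 /= np.linalg.norm(x0)
x1 /= np.linalg.norm(x1)

c0, c1 = q(A, x0), q(A, x1)
mu = 1 / (c1 - c0)           # mu = (c1 - c0)^{-1}
gamma = c0 / (c0 - c1)       # gamma = c0 (c0 - c1)^{-1}

B = mu * A + gamma * np.eye(n)
print(abs(q(B, x0)))         # should be ~0
print(abs(q(B, x1) - 1))     # should be ~0
```

Indeed ##\mu c_0 + \gamma = \frac{c_0}{c_1-c_0} - \frac{c_0}{c_1-c_0} = 0## and ##\mu c_1 + \gamma = \frac{c_1 - c_0}{c_1-c_0} = 1##, which is what the check confirms.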
     
  4. Jan 7, 2016 #3

    Samy_A

    Science Advisor
    Homework Helper

    Take arbitrary but different ##(Ax_1,x_1)## and ##(Ax_2,x_2)## in ##W(A)##.
    You can easily find ##\mu## and ##\gamma## such that ##((\mu A+\gamma I)x_1,x_1)=0## and ##((\mu A+\gamma I)x_2,x_2)=1##.
As ##\mu W(A)+\gamma=W(\mu A + \gamma I)##, showing that the straight line segment joining ##((\mu A+\gamma I)x_1,x_1)## and ##((\mu A+\gamma I)x_2,x_2)## lies in ##W(\mu A + \gamma I)## (the convexity condition) will prove the corresponding statement for ##(Ax_1,x_1)## and ##(Ax_2,x_2)## in ##W(A)##.
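The identity ##\mu W(A)+\gamma=W(\mu A + \gamma I)## follows pointwise from linearity and ##(x,x)=1##. A minimal numerical check (the matrix and the scalars ##\mu, \gamma## are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
mu, gamma = 2.0 - 1.0j, 0.5 + 0.25j   # arbitrary complex scalars
B = mu * A + gamma * np.eye(n)

for _ in range(100):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)                # unit vector, so (x, x) = 1
    lhs = np.vdot(x, B @ x)               # ((mu A + gamma I)x, x)
    rhs = mu * np.vdot(x, A @ x) + gamma  # mu (Ax, x) + gamma
    assert abs(lhs - rhs) < 1e-9
print("pointwise identity holds on 100 random unit vectors")
```

So every point of ##W(\mu A+\gamma I)## is the affine image ##\mu w + \gamma## of the corresponding point ##w \in W(A)##, and vice versa.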

    EDIT: fresh_42 was faster. :)
     
  5. Jan 8, 2016 #4
These two posts appear to contradict each other, but hopefully someone will correct me if I am wrong. Samy_A appears to be saying that we are indeed relying on the fact that ##\mu W(A) + \gamma = W(\mu A + \gamma I)## is convex iff ##W(A)## is.

fresh_42: Why is ##\langle Ax'_0,x'_0 \rangle = c_0 \cdot \langle x'_0,x'_0 \rangle## true? Is this some property of linear operators? I ask because I couldn't find any such property in my searching on the internet.
     
    Last edited: Jan 8, 2016
  6. Jan 8, 2016 #5

    Samy_A

    Science Advisor
    Homework Helper

    The way I understand their proof is as follows.
    They must show that the straight line segment joining ##(Ax_1,x_1)## and ##(Ax_2,x_2)## lies in ##W(A)## (here both vectors have norm 1).

    1) They prove it for the case that ##(Ax_1,x_1)=0## and ##(Ax_2,x_2)=1##.
    2a) They note that for the general case, one can find ##\gamma, \mu## such that ##((\mu A+\gamma I)x_1,x_1)=0##, ##((\mu A+\gamma I)x_2,x_2)=1##. Applying 1) to the operator ##\mu A+\gamma I##, it follows that the straight line segment joining ##((\mu A+\gamma I)x_1,x_1)## and ##((\mu A+\gamma I)x_2,x_2)## lies in ##W(\mu A+\gamma I)##.
    2b) ##((\mu A+\gamma I)x_1,x_1)=\mu (Ax_1,x_1)+\gamma##, ##((\mu A+\gamma I)x_2,x_2)=\mu (Ax_2,x_2)+\gamma##. It follows from 2a) that the straight line segment joining ##(Ax_1,x_1)## and ##(Ax_2,x_2)## lies in ##W(A)##.

    The difference between this outline and the paper is that they mention 2a) and 2b) first, and based on that they claim that it is sufficient to prove the particular case 1).
     
  7. Jan 8, 2016 #6

    fresh_42

    Staff: Mentor

    They don't contradict each other.

    What I tried to show is the explicit reduction from the general assertion to the proved statement. From arbitrary ##x'_i## to those with the desired properties.
##\langle Ax'_i,x'_i \rangle = c_i \cdot \langle x'_i,x'_i \rangle## is not "true". I defined the ##c_i## by it in order to find actual values for ##\mu## and ##\gamma## (step (1) of Samy's answer).
I was simply too lazy to type the fraction syntax into it: ##c_i := \frac{\langle Ax'_i,x'_i \rangle}{\langle x'_i,x'_i \rangle}## - Thank you for forcing me to do it anyway :wink:. And I hopefully made no calculation error.

Samy further mentioned that the convexity condition doesn't change under affine transformations. They don't affect convexity, since the straight lines used to define convexity undergo the same affine transformation and remain straight lines.
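This can be made concrete: for any complex ##a, b## and ##t \in [0,1]##, ##\mu(ta+(1-t)b)+\gamma = t(\mu a+\gamma)+(1-t)(\mu b+\gamma)##, so the affine image of a segment is exactly the segment between the images of its endpoints. A throwaway check (all values are arbitrary placeholders):

```python
import numpy as np

mu, gamma = 1.5 - 2.0j, -0.3 + 0.7j   # arbitrary affine map z -> mu*z + gamma
a, b = 0.2 + 1.1j, -0.8 + 0.4j        # endpoints of a segment in the plane

for t in np.linspace(0.0, 1.0, 11):
    image_of_point = mu * (t * a + (1 - t) * b) + gamma
    point_of_image = t * (mu * a + gamma) + (1 - t) * (mu * b + gamma)
    assert abs(image_of_point - point_of_image) < 1e-12
print("affine maps send segments to segments")
```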
     