Toeplitz-Hausdorff Theorem

  • Thread starter Bashyboy
  • Tags
    Theorem
I was able to understand what they were proving, but I couldn't follow the reasoning behind it. I am trying to understand the proof because the theorem is used again in the next paragraph to prove the theorem about self-adjoint operators. In any case, I think I understand their reasoning now; I just needed to hear it in different words. Thanks a lot, Samy and fresh_42.

In summary, the thread works through the first line of the proof in the linked paper. The key fact is that ##\mu W(A) + \gamma = W(\mu A + \gamma I)##, so ##W(A)## is convex if and only if ##W(\mu A + \gamma I)## is; this lets the proof assume ##\langle Ax_0,x_0 \rangle = 0## and ##\langle Ax_1,x_1 \rangle = 1##. Samy_A and fresh_42 explain this reduction and give explicit choices of ##\mu## and ##\gamma##.
  • #1
Bashyboy

Homework Statement


Here is a link to the paper I am working through: http://www.ams.org/journals/proc/1970-025-01/S0002-9939-1970-0262849-9/S0002-9939-1970-0262849-9.pdf

Homework Equations

The Attempt at a Solution


I am working on the first line of the proof. This is what I understand so far: first, they are relying on the fact that ##W(A)## is convex if and only if ##W(\mu A + \gamma I)## is. Here is where I am unsure of things. I believe the first sentence is saying that we can stretch (or contract) the set by an amount ##\mu## and translate it by an amount ##\gamma## so that there exist vectors ##x_0## and ##x_1## such that ##\langle (\mu A + \gamma I)x_0,x_0 \rangle = 0## and ##\langle (\mu A + \gamma I)x_1,x_1 \rangle = 1##. If this is true, then the problem reduces to assuming that we have an operator ##A## such that ##\langle Ax_0,x_0 \rangle = 0## and ##\langle Ax_1,x_1 \rangle = 1##.

Is that a correct interpretation? The reason I ask is that I am interested in justifying this step, and I want to know precisely what I am proving.
 
  • #2
Bashyboy said:
I am working on the first line of the proof. This is what I understand so far: first, they are relying on the fact that ##W(A)## is convex if and only if ##W(\mu A + \gamma I)## is.
No. It relies on the fact that ##A## is linear and ##||x|| = 1##.
Set ## <Ax'_0,x'_0> = c_0 \cdot <x'_0,x'_0> = c_0 ## and ## <Ax'_1,x'_1> = c_1 \cdot <x'_1,x'_1> = c_1 ## then you can define ##μ = (c_1 - c_0)^{-1}## and ##γ =c_0 (c_0 - c_1)^{-1}## to get the points ##x_0## and ##x_1##.
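As a quick numerical sanity check of these formulas (a sketch only, assuming a random complex matrix and two random unit vectors, which almost surely give distinct values ##c_0 \neq c_1##):

[CODE=Python]
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Random complex matrix A and two random unit vectors x'_0, x'_1.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
x0 /= np.linalg.norm(x0)
x1 /= np.linalg.norm(x1)

def q(B, x):
    # <Bx, x> for a unit vector x (np.vdot conjugates its first argument,
    # so this is the usual quadratic form).
    return np.vdot(x, B @ x)

c0, c1 = q(A, x0), q(A, x1)            # c_i = <A x'_i, x'_i>
mu = 1 / (c1 - c0)
gamma = c0 / (c0 - c1)

B = mu * A + gamma * np.eye(n)         # the operator mu*A + gamma*I
print(q(B, x0))                        # ~ 0 (up to rounding)
print(q(B, x1))                        # ~ 1 (up to rounding)
[/CODE]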
 
  • #3
Bashyboy said:
I am working on the first line of the proof. This is what I understand so far: first, they are relying on the fact that ##W(A)## is convex if and only if ##W(\mu A + \gamma I)## is. Here is where I am unsure of things. I believe the first sentence is saying that we can stretch (or contract) the set by an amount ##\mu## and translate it by an amount ##\gamma## so that there exist vectors ##x_0## and ##x_1## such that ##\langle (\mu A + \gamma I)x_0,x_0 \rangle = 0## and ##\langle (\mu A + \gamma I)x_1,x_1 \rangle = 1##. If this is true, then the problem reduces to assuming that we have an operator ##A## such that ##\langle Ax_0,x_0 \rangle = 0## and ##\langle Ax_1,x_1 \rangle = 1##.

Is that a correct interpretation? The reason I ask is that I am interested in justifying this step, and I want to know precisely what I am proving.
Take arbitrary but different ##(Ax_1,x_1)## and ##(Ax_2,x_2)## in ##W(A)##.
You can easily find ##\mu## and ##\gamma## such that ##((\mu A+\gamma I)x_1,x_1)=0## and ##((\mu A+\gamma I)x_2,x_2)=1##.
As ##\mu W(A)+\gamma=W(\mu A + \gamma I)##, showing that the straight line segment joining ##((\mu A+\gamma I)x_1,x_1)## and ##((\mu A+\gamma I)x_2,x_2)## lies in ##W(\mu A + \gamma I)## (the convexity condition) will prove the corresponding statement for ##(Ax_1,x_1)## and ##(Ax_2,x_2)## in ##W(A)##.
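Spelled out, the identity ##\mu W(A)+\gamma=W(\mu A + \gamma I)## is just linearity together with ##\|x\|=1##: for any unit vector ##x##,
$$\langle(\mu A+\gamma I)x,x\rangle=\mu\langle Ax,x\rangle+\gamma\langle x,x\rangle=\mu\langle Ax,x\rangle+\gamma,$$
so ##W(\mu A+\gamma I)## is exactly the image of ##W(A)## under the affine map ##z\mapsto\mu z+\gamma##.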

EDIT: fresh_42 was faster. :)
 
  • #4
fresh_42 said:
No. It relies on the fact that ##A## is linear and ##||x|| = 1##.
Set ## <Ax'_0,x'_0> = c_0 \cdot <x'_0,x'_0> = c_0 ## and ## <Ax'_1,x'_1> = c_1 \cdot <x'_1,x'_1> = c_1 ## then you can define ##μ = (c_1 - c_0)^{-1}## and ##γ =c_0 (c_0 - c_1)^{-1}## to get the points ##x_0## and ##x_1##.

Samy_A said:
Take arbitrary but different ##(Ax_1,x_1)## and ##(Ax_2,x_2)## in ##W(A)##.
You can easily find ##\mu## and ##\gamma## such that ##((\mu A+\gamma I)x_1,x_1)=0## and ##((\mu A+\gamma I)x_2,x_2)=1##.
As ##\mu W(A)+\gamma=W(\mu A + \gamma I)##, showing that the straight line segment joining ##((\mu A+\gamma I)x_1,x_1)## and ##((\mu A+\gamma I)x_2,x_2)## lies in ##W(\mu A + \gamma I)## (the convexity condition) will prove the corresponding statement for ##(Ax_1,x_1)## and ##(Ax_2,x_2)## in ##W(A)##.

EDIT: fresh_42 was faster. :)

These two posts appear to contradict each other, but hopefully someone will correct me if I am wrong. Samy_A appears to be saying that we are indeed relying on the fact that ##\mu W(A) + \gamma = W(\mu A + \gamma I)## is convex iff ##W(A)## is.

fresh_42: Why is ##<Ax'_0,x'_0> = c_0 \cdot <x'_0,x'_0>## true? Is this some property of linear operators? I ask because I couldn't find any such property in my searching on the internet.
 
  • #5
The way I understand their proof is as follows.
They must show that the straight line segment joining ##(Ax_1,x_1)## and ##(Ax_2,x_2)## lies in ##W(A)## (here both vectors have norm 1).

1) They prove it for the case that ##(Ax_1,x_1)=0## and ##(Ax_2,x_2)=1##.
2a) They note that for the general case, one can find ##\gamma, \mu## such that ##((\mu A+\gamma I)x_1,x_1)=0##, ##((\mu A+\gamma I)x_2,x_2)=1##. Applying 1) to the operator ##\mu A+\gamma I##, it follows that the straight line segment joining ##((\mu A+\gamma I)x_1,x_1)## and ##((\mu A+\gamma I)x_2,x_2)## lies in ##W(\mu A+\gamma I)##.
2b) ##((\mu A+\gamma I)x_1,x_1)=\mu (Ax_1,x_1)+\gamma##, ##((\mu A+\gamma I)x_2,x_2)=\mu (Ax_2,x_2)+\gamma##. It follows from 2a) that the straight line segment joining ##(Ax_1,x_1)## and ##(Ax_2,x_2)## lies in ##W(A)##.

The difference between this outline and the paper is that they mention 2a) and 2b) first, and based on that they claim that it is sufficient to prove the particular case 1).
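To make step 2b) explicit (this is just unwinding the affine change): a point ##t\in[0,1]## of the segment from ##0## to ##1## lies in ##W(\mu A+\gamma I)## by 2a), so ##t=((\mu A+\gamma I)y,y)## for some unit vector ##y##, and then
$$(Ay,y)=\frac{t-\gamma}{\mu}=(1-t)\,(Ax_1,x_1)+t\,(Ax_2,x_2),$$
which is the corresponding point of the segment joining ##(Ax_1,x_1)## and ##(Ax_2,x_2)##; hence that segment lies in ##W(A)##.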
 
  • #6
Bashyboy said:
These two posts appear to contradict each other, but hopefully someone will correct me if I am wrong. Samy_A appears to be saying that we are indeed relying on the fact that ##\mu W(A) + \gamma = W(\mu A + \gamma I)## is convex iff ##W(A)## is.

fresh_42: Why is ##<Ax'_0,x'_0> = c_0 \cdot <x'_0,x'_0>## true? Is this some property of linear operators? I ask because I couldn't find any such property in my searching on the internet.

They don't contradict each other.

What I tried to show is the explicit reduction from the general assertion to the proved statement. From arbitrary ##x'_i## to those with the desired properties.
##<Ax'_i,x'_i> = c_i \cdot <x'_i,x'_i>## is not "true". I defined the ##c_i## by it in order to find actual values for ##μ## and ##γ## (step (1) of Samy's answer).
I was simply too lazy to type the fraction syntax into it: ##c_i := \frac{<Ax'_i,x'_i>}{<x'_i,x'_i>}## - Thank you for forcing me to do it anyway :wink:. And I hopefully made no calculation error.

Samy further mentioned that the convexity condition is not affected by affine transformations: straight line segments, which are used to define convexity, are mapped by an affine transformation to straight line segments again.
 

What is the Toeplitz-Hausdorff Theorem?

The Toeplitz-Hausdorff Theorem is a result in functional analysis stating that the numerical range ##W(A) = \{\langle Ax,x \rangle : \|x\| = 1\}## of a bounded linear operator ##A## on a complex Hilbert space is a convex subset of the complex plane.
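For a concrete illustration, here is a small numerical sketch (assuming a random complex matrix; it samples points of ##W(A)## and checks them against supporting half-planes, using the standard fact that ##\max_{\|x\|=1}\operatorname{Re}\, e^{-i\theta}\langle Ax,x\rangle## is the largest eigenvalue of the Hermitian part of ##e^{-i\theta}A##):

[CODE=Python]
import numpy as np

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Sample points of the numerical range W(A) = { <Ax,x> : ||x|| = 1 }.
pts = []
for _ in range(2000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    x /= np.linalg.norm(x)
    pts.append(np.vdot(x, A @ x))
pts = np.array(pts)

# Supporting lines of W(A): for each direction theta, the largest eigenvalue
# of the Hermitian part of e^{-i*theta} A bounds Re(e^{-i*theta} <Ax,x>).
for theta in np.linspace(0, 2 * np.pi, 16, endpoint=False):
    rotated = np.exp(-1j * theta) * A
    H = (rotated + rotated.conj().T) / 2
    support = np.linalg.eigvalsh(H).max()
    assert (np.real(np.exp(-1j * theta) * pts) <= support + 1e-9).all()

print("all sampled points of W(A) lie inside every supporting half-plane")
[/CODE]

The sampled points fill out a region bounded by these supporting lines; the theorem says that region is in fact convex.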

Who discovered the Toeplitz-Hausdorff Theorem?

Otto Toeplitz proved in 1918 that the outer boundary of the numerical range is convex, and Felix Hausdorff completed the proof of full convexity in 1919.

What is the significance of the Toeplitz-Hausdorff Theorem?

The Toeplitz-Hausdorff Theorem is a foundational fact about the numerical range (also called the field of values). Convexity is what makes the numerical range a useful tool in operator theory and matrix analysis; for example, the closure of ##W(A)## contains the spectrum of ##A##, and in the linked paper the theorem is used to prove a result about self-adjoint operators.

Can the Toeplitz-Hausdorff Theorem be generalized to higher dimensions?

The theorem itself holds for operators on complex Hilbert spaces of arbitrary dimension, finite or infinite. Generalizations such as the joint numerical range of several operators have been studied, but convexity can fail in that setting without additional hypotheses.

What are some practical applications of the Toeplitz-Hausdorff Theorem?

The numerical range appears in numerical linear algebra (for instance, in convergence and stability estimates for iterative methods), in the stability analysis of differential equations, and in quantum mechanics, where ##\langle Ax,x \rangle## is the expectation value of the observable ##A## in the unit state ##x##.
