Prove Isometry: Exists Orthonormal Basis of V with \|Se_j\| = 1

  • Thread starter: CrazyIvan
  • Tags: Isometry, Proof
CrazyIvan

Homework Statement


Prove or give a counterexample: if \mathcal{S} \in \mathcal{L} \left( V \right) and there exists an orthonormal basis \left( e_{1} , \ldots , e_{n} \right) of V such that \left\| \mathcal{S} e_{j} \right\| = 1 for each e_{j}, then \mathcal{S} is an isometry.

Homework Equations


\mathcal{S} is an isometry if
\left\| \mathcal{S} v \right\| = \left\| v \right\|
for all v \in V.
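For reference, a standard equivalent characterization (not part of the problem statement, but it motivates the S^{*} S computation later in the thread): on a finite-dimensional inner-product space,
\left\| \mathcal{S} v \right\| = \left\| v \right\| \text{ for all } v \quad \Leftrightarrow \quad \left\langle \mathcal{S} v , \mathcal{S} w \right\rangle = \left\langle v , w \right\rangle \text{ for all } v , w \quad \Leftrightarrow \quad \mathcal{S}^{*} \mathcal{S} = I.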

The Attempt at a Solution


I figure that if I can prove that \mathcal{S} is normal, then I can use the Spectral Theorem to prove that every basis vector is an eigenvector.

Then I can prove that the square of the absolute value of the eigenvalue for each eigenvector equals 1, and the proof comes easy after that.

But I'm getting hung up on proving that it is normal.

Thanks in advance for any help.
 
You don't need eigenvalues. Write v in terms of the basis: \vec{v}= a_1\vec{e_1}+ a_2\vec{e_2}+ \cdots+ a_n\vec{e_n}. What is its length? What is \mathcal{S}v? What is its length?

You say \left\| \mathcal{S} e_{j} \right\| = 1. I would have interpreted \mathcal{S} \in \mathcal{L} \left( V \right) to mean "linear transformations from V to itself". Do you mean the dual space of V: linear transformations from V to R?
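A sketch of the expansion that hint points toward (using the convention that the inner product is conjugate-linear in its second slot): if v = a_{1} e_{1} + \cdots + a_{n} e_{n}, then
\left\| v \right\|^{2} = \sum_{i} \left| a_{i} \right|^{2} ,
while
\left\| \mathcal{S} v \right\|^{2} = \left\langle \sum_{i} a_{i} \mathcal{S} e_{i} , \sum_{j} a_{j} \mathcal{S} e_{j} \right\rangle = \sum_{i , j} a_{i} \overline{a_{j}} \left\langle \mathcal{S} e_{i} , \mathcal{S} e_{j} \right\rangle .
The hypothesis fixes only the diagonal terms \left\langle \mathcal{S} e_{i} , \mathcal{S} e_{i} \right\rangle = 1; it says nothing about the cross terms \left\langle \mathcal{S} e_{i} , \mathcal{S} e_{j} \right\rangle for i \neq j, which is what leaves room for a counterexample.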
 
HallsofIvy said:
You don't need eigenvalues. Write v in terms of the basis: \vec{v}= a_1\vec{e_1}+ a_2\vec{e_2}+ \cdots+ a_n\vec{e_n}. What is its length? What is \mathcal{S}v? What is its length?

You say \left\| \mathcal{S} e_{j} \right\| = 1. I would have interpreted \mathcal{S} \in \mathcal{L} \left( V \right) to mean "linear transformations from V to itself". Do you mean the dual space of V: linear transformations from V to R?

Sorry for my lack of clarity. \left\| \mathcal{S} e_{j} \right\| = 1 is a given property of the linear transformation \mathcal{S}. So the only thing I know about \mathcal{S} is that it's a linear transformation and it doesn't change the length of the orthonormal basis vectors.


I think I have a solution, but I'm not sure if it's the same thing you were saying:

\left\| S e_{i} \right\| = \left\| e_{i} \right\|
\left\langle S e_{i} , S e_{i} \right\rangle = \left\langle e_{i} , e_{i} \right\rangle
\left\langle e_{i} , S^{*} S e_{i} \right\rangle = \left\langle e_{i} , e_{i} \right\rangle
\left\langle e_{i} , S^{*} S e_{i} - e_{i} \right\rangle = 0
So since e_{i} is not the zero vector,
S^{*} S e_{i} - e_{i} = 0 \ \Leftrightarrow S^{*} S e_{i} = e_{i}
And from there, it's easy.

Does that look right?
 
CrazyIvan said:
Sorry for my lack of clarity. \left\| \mathcal{S} e_{j} \right\| = 1 is a given property of the linear transformation \mathcal{S}. So the only thing I know about \mathcal{S} is that it's a linear transformation and it doesn't change the length of the orthonormal basis vectors.


I think I have a solution, but I'm not sure if it's the same thing you were saying:

\left\| S e_{i} \right\| = \left\| e_{i} \right\|
\left\langle S e_{i} , S e_{i} \right\rangle = \left\langle e_{i} , e_{i} \right\rangle
\left\langle e_{i} , S^{*} S e_{i} \right\rangle = \left\langle e_{i} , e_{i} \right\rangle
\left\langle e_{i} , S^{*} S e_{i} - e_{i} \right\rangle = 0
So since e_{i} is not the zero vector,
S^{*} S e_{i} - e_{i} = 0 \ \Leftrightarrow S^{*} S e_{i} = e_{i}
And from there, it's easy.

Does that look right?

That doesn't prove S^{*} S e_{i} - e_{i} = 0. It just shows that S^{*} S e_{i} - e_{i} is orthogonal to e_{i}. I would suggest you take some time out from trying to prove it and put a little more energy into looking for a counterexample.
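One way to follow that suggestion (a sketch of a possible counterexample, not necessarily the intended one): on \mathbb{R}^{2} with the standard orthonormal basis \left( e_{1} , e_{2} \right), define S by S e_{1} = e_{1} and S e_{2} = \tfrac{1}{\sqrt{2}} \left( e_{1} + e_{2} \right). Then \left\| S e_{1} \right\| = \left\| S e_{2} \right\| = 1, but
S \left( e_{1} - e_{2} \right) = \left( 1 - \tfrac{1}{\sqrt{2}} \right) e_{1} - \tfrac{1}{\sqrt{2}} e_{2} ,
so
\left\| S \left( e_{1} - e_{2} \right) \right\|^{2} = \left( 1 - \tfrac{1}{\sqrt{2}} \right)^{2} + \tfrac{1}{2} = 2 - \sqrt{2} \neq 2 = \left\| e_{1} - e_{2} \right\|^{2} ,
so S is not an isometry. In this example S^{*} S e_{1} - e_{1} = \tfrac{1}{\sqrt{2}} e_{2}, which is orthogonal to e_{1} but not zero, exactly the gap pointed out above.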
 