Why this paradox in calculating eigenvalues for T*T?

  • #1
vish_maths
Let T be an operator on the vector space V and let ##\lambda_1, \ldots, \lambda_n## be its eigenvalues, counted with multiplicity.

Let's find the eigenvalues of the operator T*T (where T* denotes the adjoint of T, and ##\langle u, v\rangle## denotes the inner product of u and v).

If ##Tv = \lambda v##, then

##\langle Tv, Tv\rangle = \langle \lambda v, \lambda v\rangle = \lambda\bar{\lambda}\,\langle v, v\rangle = |\lambda|^2\langle v, v\rangle##

(where ##\bar{\lambda}## is the conjugate of ##\lambda##).

Since ##\langle Tv, Tv\rangle = \langle T^*Tv, v\rangle## by the definition of the adjoint,

##\Rightarrow \langle T^*Tv, v\rangle = \langle |\lambda|^2 v, v\rangle##

=> T*T has the eigenvalue ##|\lambda|^2## for the same eigenvector v that T possesses.

This can be inferred because:

Suppose f is a linear functional on V. Then there is a unique vector v in V such that ##f(u) = \langle u, v\rangle## for all u in V.

(A linear functional on V is a linear map from V to the scalars F, as Sheldon Axler says on p. 117.) The above argument must then imply that ##T^*Tv = |\lambda|^2 v##. (Doesn't this imply v is an eigenvector of T*T?)

Now, let's consider the matrix
$$M(T) = \begin{bmatrix}
1 & 3 \\
0 & 2
\end{bmatrix}$$

##[3 \ \ 1]^T## is clearly an eigenvector with eigenvalue 2.

$$M(T^*T) = \begin{bmatrix}
1 & 0 \\
3 & 2
\end{bmatrix} \begin{bmatrix}
1 & 3 \\
0 & 2
\end{bmatrix} = \begin{bmatrix}
1 & 3 \\
3 & 13
\end{bmatrix}$$

##M(T^*T)## multiplied by ##[3 \ \ 1]^T## should then produce a vector equal to ##4\,[3 \ \ 1]^T##.

However,
$$\begin{bmatrix}
1 & 3 \\
3 & 13
\end{bmatrix} \begin{bmatrix}
3 \\
1
\end{bmatrix} = \begin{bmatrix}
6 \\
22
\end{bmatrix}$$
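The arithmetic above can be confirmed with a quick NumPy sketch (matrix and vector taken from the post):

```python
import numpy as np

# Matrix from the post and its claimed eigenvector for lambda = 2.
T = np.array([[1.0, 3.0],
              [0.0, 2.0]])
v = np.array([3.0, 1.0])

# v is indeed an eigenvector of T with eigenvalue 2 ...
assert np.allclose(T @ v, 2 * v)

# ... but (T*T)v is not |lambda|^2 v = 4v:
TtT = T.T @ T          # the adjoint is just the transpose for a real matrix
print(TtT @ v)         # [ 6. 22.], not [12.  4.]
```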

Can you please advise why this paradox exists? Am I making a mistake somewhere?
Thanks
 
  • #2
I don't understand the bra-ket notation, but I think

T*T has an eigenvalue ##|\lambda|^2## for the same eigenvector v which T possesses.
is wrong.

The eigenvectors of a matrix and its (conjugate) transpose are different in general. Your proof seems to assume they are the same. If ##Tx = \lambda x##, then ##x^*T^* = \lambda^* x^*##, but that doesn't make ##x^*## a (right) eigenvector of ##T^*##. (It is a left eigenvector of ##T^*##, of course).
 
  • #3
AlephZero said:
I don't understand the bra-ket notation,
There's no bra-ket notation there, just inner products, absolute values, and the definition of the adjoint operator.

vish_maths said:
##\langle Tv, Tv\rangle = \langle \lambda v, \lambda v\rangle = \lambda\bar{\lambda}\langle v, v\rangle = |\lambda|^2\langle v, v\rangle##

(where ##\bar{\lambda}## is the conjugate of ##\lambda##)

##\Rightarrow \langle T^*Tv, v\rangle = \langle |\lambda|^2 v, v\rangle##

=> T*T has the eigenvalue ##|\lambda|^2## for the same eigenvector v that T possesses.
Why would ##\langle T^*Tv,v\rangle=\langle|\lambda|^2v,v\rangle## imply that ##T^*Tv=|\lambda|^2 v##?

I see that you tried to prove this, but the argument doesn't seem to make sense. There appears to be something missing from it as well. f(u)= what? Did you mean f(u)=v for all u? That actually implies that v=0.

vish_maths said:
This can be inferred because :

Suppose f is a linear functional on V . Then there is a unique vector v in V such that f(u) = for all u in V.

(A linear functional on V is a linear map from V to the scalars F, as Sheldon Axler says on p. 117.) The above argument must then imply that ##T^*Tv = |\lambda|^2 v## (Doesn't this imply v is an eigenvector of T*T?)
 
  • #4
Note that if we let w be any vector that's orthogonal to v, we have ##\langle T^*Tv,v\rangle =\langle T^*Tv+w,v\rangle##.
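Fredrik's observation can be checked numerically; the concrete vectors below are made up for illustration, with x standing in for ##T^*Tv##:

```python
import numpy as np

v = np.array([3.0, 1.0])
w = np.array([-1.0, 3.0])   # orthogonal to v: <w, v> = -3 + 3 = 0
x = np.array([6.0, 22.0])   # plays the role of T*T v

# <x, v> = <x + w, v> even though x and x + w are different vectors,
# so equality of inner products against this one v proves nothing.
assert np.isclose(x @ v, (x + w) @ v)
assert not np.allclose(x, x + w)
```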
 
  • #5
AlephZero said:
I don't understand the bra-ket notation, but I think

T*T has an eigenvalue ##|\lambda|^2## for the same eigenvector v which T possesses.

is wrong.

The eigenvectors of a matrix and its (conjugate) transpose are different in general. Your proof seems to assume they are the same. If ##Tx = \lambda x##, then ##x^*T^* = \lambda^* x^*##, but that doesn't make ##x^*## a (right) eigenvector of ##T^*##. (It is a left eigenvector of ##T^*##, of course).

Thanks. ##\langle u, v\rangle##, as Fredrik pointed out, refers to the inner product of u and v. I get your argument regarding the adjoint operation.


Fredrik said:
There's no bra-ket notation there, just inner products, absolute values, and the definition of the adjoint operator.


Why would ##\langle T^*Tv,v\rangle=\langle|\lambda|^2v,v\rangle## imply that ##T^*Tv=|\lambda|^2 v##?

I see that you tried to prove this, but the argument doesn't seem to make sense. There appears to be something missing from it as well. f(u)= what? Did you mean f(u)=v for all u? That actually implies that v=0.

Fredrik said:
Note that if we let w be any vector that's orthogonal to v, we have ##\langle T^*Tv,v\rangle =\langle T^*Tv+w,v\rangle##.

Hi Fredrik,
Thanks for the answer. I have corrected it in the main thread: ##f(u) = \langle u, v\rangle## for all u in V.

I think the implication ##\langle Tx, y\rangle = \langle z, y\rangle \Rightarrow Tx = z## holds if and only if ##\langle Tx, y\rangle = \langle z, y\rangle## is valid for all ##y \in V##.

Let's consider the more general case of ##\langle w, y\rangle = \langle z, y\rangle##. Then w = z if and only if this relation holds for all ##y \in V##.

Proof: Suppose there exists a different vector ##w \neq z## such that ##\langle w, y\rangle = \langle z, y\rangle## for all ##y \in V##.

Then ##\langle w - z, y\rangle = 0## for all ##y \in V##. Choosing ##y = w - z## gives

##\langle w - z, w - z\rangle = 0 \Rightarrow w = z##, a contradiction.
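This uniqueness argument can be illustrated numerically (a sketch with made-up vectors): agreement of the inner products against one particular y does not force w = z, but the choice y = w − z from the proof does detect any difference:

```python
import numpy as np

w = np.array([1.0, 2.0])
z = np.array([1.0, 0.0])
y = np.array([1.0, 0.0])

# <w, y> = <z, y> for this single y, yet w != z:
assert np.isclose(w @ y, z @ y) and not np.allclose(w, z)

# The proof's choice y = w - z exposes the difference:
d = w - z
assert d @ d > 0    # <w - z, w - z> != 0, so w != z is detected
```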
 
  • #6
##\langle T^*Tv, v\rangle = \langle |\lambda|^2 v, v\rangle## -------- (1)

=> I guess T*T has the eigenvalue ##|\lambda|^2## for the same eigenvector v that T possesses, if and only if (1) is valid for ALL ##v \in V##.
 
  • #7
vish_maths said:
##\langle T^*Tv, v\rangle = \langle |\lambda|^2 v, v\rangle## -------- (1)

=> I guess T*T has the eigenvalue ##|\lambda|^2## for the same eigenvector v that T possesses, if and only if (1) is valid for ALL ##v \in V##.
If it holds for all v, I think the correct conclusion is that T*T is equal to ##|\lambda|^2I## where I is the identity operator. But more importantly, as you know, you have only proved that the equality holds for a specific v.
 
  • #8
vish_maths said:
##\langle T^*Tv, v\rangle = \langle |\lambda|^2 v, v\rangle## -------- (1)

=> I guess T*T has the eigenvalue ##|\lambda|^2## for the same eigenvector v that T possesses, if and only if (1) is valid for ALL ##v \in V##.

It is clearly not true for all ##v\in V## since you took a very special ##v##, namely an eigenvector.
 
  • #9
Fredrik said:
There's no bra-ket notation there, just inner products, absolute values, and the definition of the adjoint operator.

Fair comment, but (as an ex-mathematician turned engineer) I never use the < > notation for inner products and have to think at least twice what "adjoint operator" means :smile:

But in any case the problem here is with the math, not with the notation!
 
  • #10
Fredrik said:
If it holds for all v, I think the correct conclusion is that T*T is equal to ##|\lambda|^2I## where I is the identity operator. But more importantly, as you know, you have only proved that the equality holds for a specific v.

micromass said:
It is clearly not true for all ##v\in V## since you took a very special ##v##, namely an eigenvector.

AlephZero said:
Fair comment, but (as an ex-mathematician turned engineer) I never use the < > notation for inner products and have to think at least twice what "adjoint operator" means :smile:

But in any case the problem here is with the math, not with the notation!

Got it. I didn't take into consideration that all v in V have to satisfy the condition. Thanks!
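The resolution can also be checked numerically: for the matrix from the thread, the eigenvalues of T*T are the squared singular values of T, not the squared eigenvalues of T (a NumPy sketch):

```python
import numpy as np

T = np.array([[1.0, 3.0],
              [0.0, 2.0]])

# Eigenvalues of T*T equal the squares of the singular values of T ...
sq_singular = np.sort(np.linalg.svd(T, compute_uv=False) ** 2)
eig_TtT = np.sort(np.linalg.eigvalsh(T.T @ T))
assert np.allclose(sq_singular, eig_TtT)

# ... which for this non-normal T differ from the squared eigenvalues
# of T (1 and 4):
print(eig_TtT)   # approx [0.29, 13.71], i.e. 7 -/+ 3*sqrt(5)
```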
 

1. Why do we need to calculate eigenvalues for T*T?

Calculating the eigenvalues of T*T is important because it helps us understand the behavior and properties of linear transformations. It is also used to solve systems of linear equations and to study the stability of dynamical systems.

2. What is the paradox in calculating eigenvalues for T*T?

The paradox arises from assuming that an eigenvector of T is automatically an eigenvector of T*T. The identity ##\langle T^*Tv, v\rangle = \langle |\lambda|^2 v, v\rangle## holds for a specific eigenvector v, but ##\langle w, y\rangle = \langle z, y\rangle## implies w = z only when it holds for all y in V, so the conclusion ##T^*Tv = |\lambda|^2 v## does not follow.

3. How can we resolve this paradox?

The paradox is resolved by recognizing the flawed step above. Note also that T*T is always self-adjoint and positive semidefinite, so by the spectral theorem it can be diagonalized by an orthonormal basis of eigenvectors, and its eigenvalues are real and nonnegative: they are the squares of the singular values of T, which in general differ from ##|\lambda|^2## for the eigenvalues ##\lambda## of a non-normal T.
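As a sketch of this diagonalization (using NumPy's `eigh` routine for self-adjoint matrices, with the T*T from the thread):

```python
import numpy as np

# T*T is self-adjoint and positive semidefinite, so the spectral theorem
# applies: eigh diagonalizes it with an orthonormal eigenbasis.
T = np.array([[1.0, 3.0],
              [0.0, 2.0]])
A = T.T @ T
vals, Q = np.linalg.eigh(A)   # A = Q @ diag(vals) @ Q.T

assert np.all(vals >= -1e-12)                      # real, nonnegative
assert np.allclose(Q @ np.diag(vals) @ Q.T, A)     # diagonalization holds
```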

4. Are there any other methods of calculating eigenvalues for T*T?

Yes, iterative methods such as the power method, the QR algorithm, and the Jacobi method can be used to compute the eigenvalues of T*T. These methods may be more computationally efficient and accurate for certain types of matrices.
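As a sketch of one such method, here is a minimal power iteration (the function name `power_method` is my own) applied to the T*T from the thread; it estimates only the dominant eigenvalue:

```python
import numpy as np

def power_method(A, iters=200):
    """Estimate the dominant eigenvalue of A by repeated multiplication."""
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x /= np.linalg.norm(x)     # renormalize to avoid overflow
    return x @ (A @ x)             # Rayleigh quotient of the limit vector

T = np.array([[1.0, 3.0],
              [0.0, 2.0]])
est = power_method(T.T @ T)
# The estimate matches the largest eigenvalue of T*T:
assert np.isclose(est, max(np.linalg.eigvalsh(T.T @ T)))
```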

5. What are the practical applications of calculating eigenvalues for T*T?

Calculating the eigenvalues of T*T has many applications in physics, engineering, and data analysis. It can be used to study the behavior of dynamical systems, analyze the stability of structures, and identify important features in large datasets.
