Another positive operator proof

  • #1
evilpostingmong

Homework Statement


Suppose that T is a positive operator on V. Prove that T is invertible
if and only if <Tv, v> > 0 for every v ∈ V \ {0}.


Homework Equations





The Attempt at a Solution


If T is invertible, then TT^(-1)=I. Now let v=v1+...+vn and let Tv=a1v1+...+anvn. Then <Tv, v>=<a1v1, v>+...+<anvn, v>. Applying T^(-1) we get <T^(-1)(a1v1), v>+...+<T^(-1)(anvn), v>=<v1, v>+...+<vn, v>=<Iv, v>=<v, v>. Since v ∉ {0}, <v, v> > 0. And since T is invertible,
<Tv, v>=<v, v> if T=I, or <Tv, v> > <v, v> if T=/=I. Therefore <Tv, v> >= <v, v>,
and since <v, v> > 0, <Tv, v> > 0.
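Not part of the proof, but as a sanity check I tried the forward direction numerically, with a made-up symmetric matrix whose eigenvalues are strictly positive (the matrix and the little script are just my own illustration):

Code:
import numpy as np

# A made-up symmetric matrix with strictly positive eigenvalues,
# so it is a positive, invertible operator on R^3.
T = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.standard_normal(3)    # a random nonzero vector
    print(v @ T @ v > 0)          # <Tv, v> comes out positive each time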
 
  • #2
I don't know how to handle the other direction. All we know is that <Tv, v> > 0.
T doesn't have to be invertible: it can be a projection and still be positive.
 
  • #3
You know more than <Tv,v> >= 0 for a positive operator. You also know T is self-adjoint. You stated the definition in a previous thread. If it weren't, the theorem wouldn't be true. Take T to be rotation by 90 degrees in R^2. Then <Tv,v>=0. But T is invertible. It's just not self-adjoint.
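If it helps, here is that counterexample as a quick numerical check (just NumPy; the particular test vector is arbitrary):

Code:
import numpy as np

# Rotation by 90 degrees in R^2: invertible, <Tv, v> = 0 for every v,
# and not self-adjoint.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

v = np.array([2.0, 3.0])
print(v @ R @ v)            # 0.0, since Rv is always orthogonal to v
print(np.linalg.det(R))     # 1.0, so R is invertible
print(np.allclose(R, R.T))  # False, so R is not self-adjoint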
 
  • #4
Dick said:
You know more than <Tv,v> >= 0 for a positive operator. You also know T is self-adjoint. You stated the definition in a previous thread. If it weren't, the theorem wouldn't be true. Take T to be rotation by 90 degrees in R^2. Then <Tv,v>=0. But T is invertible. It's just not self-adjoint.
Alright, for the first part the only thing that I will change is the definition of <Tv, v>.
Since we know T is self-adjoint and positive, T has a positive square root S. So
<S^2v, v>=<v, S^2v>.
I have no idea how to do the other direction.
 
Last edited:
  • #5
evilpostingmong said:
Alright, for the first part the only thing that I will change is the definition of <Tv, v>.
Since we know T is self-adjoint and positive, T has a positive square root S. So
<S^2v, v>=<v, S^2v>.
I have no idea how to do the other direction.

The reason you know T has a square root is that there is an orthogonal basis of the vector space consisting entirely of eigenvectors of T. What can you say about the values of the eigenvalues?
 
  • #6
Dick said:
The reason you know T has a square root is that there is an orthogonal basis of the vector space consisting entirely of eigenvectors of T. What can you say about the values of the eigenvalues?

All of the eigenvalues should be >= 0, so it follows that each eigenvalue has a positive square root. I take it this is
important because if an eigenvalue were negative, then its square root would be imaginary. So
we'd have 0-i with its conjugate 0+i, which wouldn't allow self-adjointness.
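As a rough check on my end (the matrix below is something I made up, and I'm using the usual eigendecomposition construction for the square root S):

Code:
import numpy as np

# Invented symmetric positive semidefinite matrix (note one zero eigenvalue).
T = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])

w, U = np.linalg.eigh(T)            # real eigenvalues, orthonormal eigenvectors
print(w)                            # all eigenvalues are >= 0
S = U @ np.diag(np.sqrt(w)) @ U.T   # S is positive and S^2 = T
print(np.allclose(S @ S, T))        # True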
 
Last edited:
  • #7
evilpostingmong said:
All of the eigenvalues should be >= 0. If we had even one eigenvalue < 0,
then <Tv, v> would be negative for the corresponding eigenvector v. So all of T's eigenvalues must be >= 0, and it follows
that each eigenvalue has a positive square root.

Zero isn't positive. Who cares about a "square root"?? I thought you were trying to prove something about when T is invertible.
 
  • #8
Dick said:
Zero isn't positive. Who cares about a "square root"?? I thought you were trying to prove something about when T is invertible.

what should I do now?
 
  • #9
evilpostingmong said:
what should I do now?

Ack! Think about it! If T has a zero eigenvalue, is it invertible??
 
  • #10
Dick said:
Ack! Think about it! If T has a zero eigenvalue, is it invertible??

No. If Tv=a1v1+...+0*vn, you're not going to get v back. If Tv=a1v1+...+anvn (no eigenvalue is 0) then you can
get v back by dividing each eigenvalue by itself. Sorry for being lazy, it's a hot day. I've got to pull my weight here.
 
Last edited:
  • #11
evilpostingmong said:
No. If Tv=a1v1+...+0*vn, you're not going to get v back. If Tv=a1v1+...+anvn (no eigenvalue is 0) then you can
get v back by dividing each eigenvalue by itself. Sorry for being lazy, it's a hot day. I've got to pull my weight here.

That's pretty incomprehensible.
 
  • #12
Dick said:
That's pretty incomprehensible.

It's like this: Say the domain has a basis {(1 0 0), (0 1 0), (0 0 1)}.
Now say T maps from here to a range with basis {(1 0 0), (0 1 0)}. So the
matrix is 3x3 with a 0 row at the bottom. This implies that there is a 0 eigenvalue.
Now let's say that the vector we map is v=a(1 1 1) (a=/=0). T(1 0 0)+T(0 1 0) would
never equate to a(1 1 1) no matter what T is, since a(1 1 1) is not within the basis for the range.
This is what I mean by "not being able to get v back". This applies to all vectors with three
nonzero entries.
 
  • #13
evilpostingmong said:
It's like this: Say the domain has a basis {(1 0 0), (0 1 0), (0 0 1)}.
Now say T maps from here to a range with basis {(1 0 0), (0 1 0)}. So the
matrix is 3x3 with a 0 row at the bottom. This implies that there is a 0 eigenvalue.
Now let's say that the vector we map is v=a(1 1 1) (a=/=0). T(1 0 0)+T(0 1 0) would
never equate to a(1 1 1) no matter what T is, since a(1 1 1) is not within the basis for the range.
This is what I mean by "not being able to get v back". This applies to all vectors with three
nonzero entries.

You know if Tv=0*v for v not zero, then v is in ker(T). If ker(T) is not {0} then T is not invertible. You KNOW that. You don't have to try to reprove that fact with awkward bad examples every time you need it.
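If you want to see that fact in numbers, here it is with a made-up diagonal example:

Code:
import numpy as np

# Diagonal matrix with a zero eigenvalue: e3 is a nonzero vector in ker(T).
T = np.diag([3.0, 2.0, 0.0])
e3 = np.array([0.0, 0.0, 1.0])
print(T @ e3)                    # [0. 0. 0.], so ker(T) is not {0}
print(np.linalg.matrix_rank(T))  # 2 < 3, so T is not invertible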
 
  • #14
The problem is that when we're given <Tv, v> > 0, T doesn't have to be invertible.
T can be an orthogonal projection with all eigenvalues > 0 (except for, say, one eigenvalue being 0)
and <Tv, v> can still be > 0. I mean, we know that <Tv, v> > 0 but we don't know whether or not T is invertible.
But it is still possible for <Tv, v> to be > 0 when T is invertible.
 
  • #15
evilpostingmong said:
The problem is that when we're given <Tv, v> > 0, T doesn't have to be invertible.
T can be an orthogonal projection with all eigenvalues > 0 (except for, say, one eigenvalue being 0)
and <Tv, v> can still be > 0. I mean, we know that <Tv, v> > 0 but we don't know whether or not T is invertible.
But it is still possible for <Tv, v> to be > 0 when T is invertible.

That's a pretty poor example of a mapping where <Tv,v> >0. <Tv,v>=0 for the vectors that are projected out. What's your point?
 
  • #16
Dick said:
That's a pretty poor example of a mapping where <Tv,v> >0. <Tv,v>=0 for the vectors that are projected out. What's your point?

Let's say v=(1 1 1). Now Tv=(1 1 0) with (0 0 1) being in null T.
Now this is an orthogonal projection, since the inner product between (1 1 0) and
(0 0 1) is 0. But the inner product between (1 1 0) and (1 1 1), i.e. <Tv, v>, is
greater than 0. Orthogonal projections are not invertible. But the
inner product <Tv, v> is > 0. The problem is using <Tv, v> > 0 to prove that
T is invertible.
 
  • #17
evilpostingmong said:
Let's say v=(1 1 1). Now Tv=(1 1 0) with (0 0 1) being in null T.
Now this is an orthogonal projection, since the inner product between (1 1 0) and
(0 0 1) is 0. But the inner product between (1 1 0) and (1 1 1), i.e. <Tv, v>, is
greater than 0. Orthogonal projections are not invertible. But the
inner product <Tv, v> is > 0. The problem is using <Tv, v> > 0 to prove that
T is invertible.

Look. The problem says prove T is invertible if <Tv,v> >0 for EVERY v not equal to zero (and T self-adjoint). A projection with a nontrivial kernel DOES NOT have that property. Period. End of discussion. Read the problem statement again, several times.
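To spell that out in numbers (the projection below is just one convenient diagonal example):

Code:
import numpy as np

# Orthogonal projection of R^3 onto span{e1, e2}.
P = np.diag([1.0, 1.0, 0.0])

v = np.array([1.0, 1.0, 1.0])
print(v @ P @ v)        # 2.0 > 0 for this particular v ...

w = np.array([0.0, 0.0, 1.0])
print(w @ P @ w)        # ... but 0.0 for w = e3, so the hypothesis
                        # "<Tv, v> > 0 for EVERY nonzero v" fails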
 
  • #18
Ok, let <Tv, v> > 0. Now let v=e1+...+en where ek is an eigenvector of T from
an orthonormal basis of V. Since T is positive and self-adjoint, let T be a diagonal matrix with one positive eigenvalue for each ek. Should I assume this? Wait, never mind: the eigenvalues should be nonnegative.
 
Last edited:
  • #19
State clearly what part of the proof you are trying to do. Say, I'm trying to prove that if ___ then ___. Fill in the blanks.
 
  • #20
Dick said:
State clearly what part of the proof you are trying to do. Say, I'm trying to prove that if ___ then ___. Fill in the blanks.

I'm trying to prove that if <Tv, v> > 0 for every nonzero v, and T is a positive operator, then T is invertible.
 
  • #21
Ok then since T is self-adjoint it has an orthonormal basis of eigenvectors with real eigenvalues, right? You know the eigenvalues are positive since <Tv,v> >0, also right? So as you said, in that basis the matrix of T is diagonal with positive entries (the eigenvalues) on the diagonal, still ok? Is T invertible? What does T^(-1) look like?
 
  • #22
Dick said:
Ok then since T is self-adjoint it has an orthonormal basis of eigenvectors with real eigenvalues, right? You know the eigenvalues are positive since <Tv,v> >0, also right? So as you said, in that basis the matrix of T is diagonal with positive entries (the eigenvalues) on the diagonal, still ok? Is T invertible? What does T^(-1) look like?

OK, I took one post you have given me for granted. 0 is not positive. I've been
moping around about a zero eigenvalue when in reality, T (as a positive operator)
is not even supposed to have 0 in the first place. It is supposed to have eigenvalues
with positive square roots. Square roots will not even be considered in the proof, btw.

Now if you want a matrix, T^(-1) is the inverse of an nxn matrix. Multiplying it by the matrix T gives the identity matrix,
where 1's fill the diagonal and all other entries not on the diagonal are 0's.
Or in non-matrix form, I can demonstrate T^(-1) this way...
Let v=e1+...+en (dim V=n) where ek is an eigenvector in an orthonormal basis of V.
Now Tv=c1e1+...+cnen with each ck>0. Now when we apply T^(-1), we invert,
or undo, what T has done to v. Now T^(-1)v=T^(-1)(c1e1)+...+T^(-1)(cnen)
=e1+...+en.
 
Last edited:
  • #23
Ok, that's basically it. T^(-1) is also a diagonal matrix whose entries are the inverses of the diagonal entries of T. But notice your definition of a positive operator is only that <Tv,v> >=0. It's only when you add the condition <Tv,v> >0 that T is necessarily invertible. A general positive operator CAN have zero eigenvalues.
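In numbers, with a made-up diagonal example written in the eigenvector basis:

Code:
import numpy as np

# T diagonal with strictly positive eigenvalues, in the orthonormal
# eigenvector basis; its inverse just inverts each diagonal entry.
eigs = np.array([2.0, 0.5, 3.0])
T = np.diag(eigs)
T_inv = np.diag(1.0 / eigs)
print(np.allclose(T @ T_inv, np.eye(3)))   # True, so T is invertible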
 
  • #24
Dick said:
Ok, that's basically it. T^(-1) is also a diagonal matrix whose entries are the inverses of the diagonal entries of T. But notice your definition of a positive operator is only that <Tv,v> >=0. It's only when you add the condition <Tv,v> >0 that T is necessarily invertible. A general positive operator CAN have zero eigenvalues.

Ok, just so we're clear... let's say that our diagonal matrix has one zero
eigenvalue on its diagonal. This IS positive. But the problem is that we cannot
guarantee that the v that T maps won't be sent to 0. So let's say that
v=ek. If v=ek and 0 is the eigenvalue at entry k,k on our diagonal matrix,
then <Tv, v>=<0*v, v>=0. You told me yourself... I should prove that
T is invertible for every nonzero v where <Tv, v> > 0.
 
  • #25
An operator is either invertible or it's not invertible. I did not tell you to prove it's invertible only for v such that <Tv,v> >0. The premise is that <Tv,v> >0 for ALL nonzero v. Period. End of discussion. Again.
 
  • #26
Oh, wait. Are you trying to prove the converse? If <Tv,v>=0 for SOME nonzero v then T is not invertible? It would really help if you would say what you are trying to do.
 
  • #27
Dick said:
Oh, wait. Are you trying to prove the converse? If <Tv,v>=0 for SOME nonzero v then T is not invertible? It would really help if you would say what you are trying to do.

The idea I had in mind was that if <Tv, v>=0 for some nonzero v, then T is not invertible.

Suppose <Tv, v> > 0 for all nonzero v in V. Now let the matrix of T be a diagonal matrix containing
the eigenvalue for each eigenvector in our orthonormal basis B={e1,...,en} of V. Now let v=ek
(1<=k<=n) where ek is any member of B. If <Tv, v> > 0, which is true for all nonzero vectors in V, then we can conclude that
all the eigenvectors have positive eigenvalues. Therefore T has a null space of {0}.

Hope I wasn't being too vague here. The point I'm trying to make is that if Tek=/=0 for every ek,
then T is invertible. I'm also half asleep,
so if I made a mistake, I'll fix it later when I get out of this mummified state.
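Spelling out the step I glossed over: if v=ek and Tek=λkek, then <Tv, v>=<λkek, ek>=λk<ek, ek>=λk, since the ek are orthonormal. So applying the hypothesis <Tv, v> > 0 to v=ek gives λk > 0 directly, for every k.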
 
Last edited:
  • #28
Use your orthonormal basis of eigenvectors. This time prove that if <Tv,v>=0 for some vector, NOT necessarily an eigenvector, that at least one eigenvalue must be zero. That would show T is not invertible.
 
  • #29
Dick said:
Use your orthonormal basis of eigenvectors. This time prove that if <Tv,v>=0 for some vector, NOT necessarily an eigenvector, that at least one eigenvalue must be zero. That would show T is not invertible.

Ok I will use the same basis and T. Now if <Tv, v>=0 for some v, then
we have <Tv, v>=<Te1, v>+...+<Tek, v>=<0e1, v>+...+<0ek, v>=0 where 1<=k.
If this is true, then at least one eigenvalue would be 0, thus T is not invertible.
 
  • #30
evilpostingmong said:
Ok I will use the same basis and T. Now if <Tv, v>=0 for some v, then
we have <Tv, v>=<Te1, v>+...+<Tek, v>=<0e1, v>+...+<0ek, v>=0 where 1<=k.
If this is true, then at least one eigenvalue would be 0, thus T is not invertible.

That 'proof' doesn't make ANY SENSE. 1<=k?? <0e1,v>?? This is not that hard. TRY to be coherent for once.
 
  • #31
Dick said:
That 'proof' doesn't make ANY SENSE. 1<=k?? <0e1,v>?? This is not that hard. TRY to be coherent for once.


Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that Tek=ck,kek with ck,k >= 0), then <Ts, s>=0 if Ts=0*s where 0 is an eigenvalue. Therefore there are k 0 eigenvalues on T's diagonal, so s is a nonzero vector in ker T.
 
  • #32
evilpostingmong said:
Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that Tek=ck,kek with ck,k >= 0), then <Ts, s>=0 if Ts=0*s where 0 is an eigenvalue. Therefore there are k 0 eigenvalues on T's diagonal, so s is a nonzero vector in ker T.

? Sure, <Ts,s>=0 if Ts=0*s. That's no surprise. But what is that supposed to prove??
 
  • #33
Dick said:
? Sure, <Ts,s>=0 if Ts=0*s. That's no surprise. But what is that supposed to prove??

Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that Tek=ck,kek with ck,k >= 0), then <Ts, s>=0 if each eigenvector adding to s corresponds to an eigenvalue of 0. Therefore there are k 0 eigenvalues on T's diagonal that correspond to the k eigenvectors that add to s, so s is a nonzero vector in ker T.
Since there is a vector (s) that maps to zero, T is not invertible.
 
  • #34
evilpostingmong said:
Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that Tek=ck,kek with ck,k >= 0), then <Ts, s>=0 if each eigenvector adding to s corresponds to an eigenvalue of 0. Therefore there are k 0 eigenvalues on T's diagonal that correspond to the k eigenvectors that add to s, so s is a nonzero vector in ker T.
Since there is a vector (s) that maps to zero, T is not invertible.

SHOW that all of the eigenvectors that combine to make s have eigenvalue zero. Don't just say so. That's STILL not a proof. Let s=c1*e1+...+cn*en. Set T(ei)=ai*ei so the ai are the eigenvalues (ai >= 0 since T is positive). Take the e's to be orthonormal. Now compute <Ts,s> in terms of the ci's and ai's. Use that to back up your claim that there must be a zero eigenvalue.
 
  • #35
Dick said:
SHOW that all of the eigenvectors that combine to make s have eigenvalue zero. Don't just say so. That's STILL not a proof. Let s=c1*e1+...+cn*en. Set T(ei)=ai*ei so the ai are the eigenvalues (ai >= 0 since T is positive). Take the e's to be orthonormal. Now compute <Ts,s> in terms of the ci's and ai's. Use that to back up your claim that there must be a zero eigenvalue.
Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that T(ckek)=ak,kckek with ak,k >= 0), then <Ts, s>=0 if each eigenvector adding to s corresponds to an eigenvalue of 0. Therefore, setting s=c1e1+...+ckek, we have
T(s)=a1,1c1e1+...+ak,kckek. Now <T(s), s>=<a1,1c1e1+...+ak,kckek, s>. Since
ai,i=0, <T(s), s>=<0*e1+...+0*ek, s>=0. Therefore there are k 0 eigenvalues on T's diagonal that correspond to the k eigenvectors that add to s, so s is a nonzero vector in ker T.
Since there is a vector (s) that maps to zero, T is not invertible.
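To actually write out the computation you asked for (taking the e's orthonormal and the ci real): <Ts, s>=<a1,1c1e1+...+ak,kckek, c1e1+...+ckek>=a1,1c1^2+...+ak,kck^2, since <ei, ej>=0 for i=/=j and <ei, ei>=1. Every term ai,ici^2 is >= 0, so the sum can only equal 0 if each term is 0. Hence ai,i=0 for every i with ci=/=0, and each such ei is a nonzero vector that T sends to 0.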
 
