Another positive operator proof

evilpostingmong

Homework Statement


Suppose that T is a positive operator on V. Prove that T is invertible
if and only if <Tv, v> > 0 for every v ∈ V \ {0}.


Homework Equations





The Attempt at a Solution


If T is invertible, then TT^(-1)=I. Now let v=v1+...+vn and let Tv=a1v1+...+anvn. Now <Tv, v>=<a1v1, v>+...+<anvn, v>. Applying T^(-1) we get <T^(-1)(a1v1), v>+...+<T^(-1)(anvn), v>=<v1, v>+...+<vn, v>=<Iv, v>=<v, v>. Since v ∉ {0}, <v, v> is >0. And since T is invertible,
<Tv, v>=<v, v> if T=I, or <Tv, v> > <v, v> if T=/=I. Therefore <Tv, v> is >= <v, v>.
And since <v, v> is >0, <Tv, v> is >0.
 
I don't know how to handle the other direction. All we know is that <Tv, v> is >0.
T doesn't have to be invertible. It can be a projection and <Tv, v> can still be >0.
 
You know more than <Tv,v> >= 0 for a positive operator. You also know T is self-adjoint. You stated the definition in a previous thread. If it weren't self-adjoint, the theorem wouldn't be true. Take T to be rotation by 90 degrees in R^2. Then <Tv,v>=0. But T is invertible. But it's not self-adjoint.
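A minimal NumPy sketch of the rotation example above (the particular matrix, test vector, and library calls are illustrative additions, not from the thread):

Code:
import numpy as np

# Rotation by 90 degrees in R^2: <Tv, v> = 0 for every v, yet T is invertible.
# The theorem needs T self-adjoint (positive); this T is not.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])

v = np.array([3.0, 4.0])          # any nonzero vector
print(np.dot(T @ v, v))           # 0.0   -> <Tv, v> = 0
print(np.allclose(T, T.T))        # False -> not self-adjoint
print(np.linalg.det(T))           # 1.0   -> nonzero determinant, so invertible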
 
Dick said:
You know more than <Tv,v> >= 0 for a positive operator. You also know T is self-adjoint. You stated the definition in a previous thread. If it weren't self-adjoint, the theorem wouldn't be true. Take T to be rotation by 90 degrees in R^2. Then <Tv,v>=0. But T is invertible. But it's not self-adjoint.
Alright, for the first part the only thing that I will change is the definition of <Tv, v>.
Since we know T is self-adjoint and positive, T has a positive square root S. So
<S^2v, v>=<v, S^2v>.
I have no idea how to do the other direction.
 
evilpostingmong said:
Alright, for the first part the only thing that I will change is the definition of <Tv, v>.
Since we know T is self-adjoint and positive, T has a positive square root S. So
<S^2v, v>=<v, S^2v>.
I have no idea how to do the other direction.

The reason you know T has a square root is that there is an orthogonal basis of the vector space consisting of eigenvectors of T. What can you say about the values of the eigenvalues?
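A short numerical sketch of that square-root construction, using an illustrative symmetric positive matrix (the matrix itself is not from the thread): diagonalize T, check the eigenvalues are nonnegative, and take square roots on the diagonal.

Code:
import numpy as np

# T is self-adjoint, so eigh returns real eigenvalues and an orthonormal eigenbasis.
# A square root of a positive T comes from taking square roots of those eigenvalues.
T = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])

w, Q = np.linalg.eigh(T)            # eigenvalues w, eigenvectors in the columns of Q
print(w)                            # all >= 0 for a positive operator
S = Q @ np.diag(np.sqrt(w)) @ Q.T   # the positive square root of T
print(np.allclose(S @ S, T))        # True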
 
Dick said:
The reason you know T has a square root is that there is an orthogonal basis of the vector space consisting of eigenvectors of T. What can you say about the values of the eigenvalues?

All of the eigenvalues should be >=0, so it follows that each eigenvalue has a positive square root. I take it that this is important because if an eigenvalue were negative, then its square root would be imaginary. So we'd have 0-i with its conjugate 0+i. This wouldn't allow self-adjointness.
 
evilpostingmong said:
All of the eigenvalues should be >=0. If we have even one eigenvalue <0,
then <Tv, v> would be negative. So all of T's eigenvalues must be >=0 so it follows
that each eigenvalue has a positive square root.

Zero isn't positive. Who cares about a "square root"?? I thought you were trying to prove something about when T is invertible.
 
Dick said:
Zero isn't positive. Who cares about a "square root"?? I thought you were trying to prove something about when T is invertible.

what should I do now?
 
evilpostingmong said:
what should I do now?

Ack! Think about it! If T has a zero eigenvalue, is it invertible??
 
  • #10
Dick said:
Ack! Think about it! If T has a zero eigenvalue, is it invertible??

No. If Tv=a1v1+...+0*vn, you're not going to get v back. If Tv=a1v1+...+anvn (no eigenvalue is 0) then you can get v back by dividing each eigenvalue by itself. Sorry for being lazy, it's a hot day. I've got to pull my weight here.
 
  • #11
evilpostingmong said:
No. If Tv=a1v1+...+0*vn, you're not going to get v back. If Tv=a1v1+...+anvn (no eigenvalue is 0) then you can get v back by dividing each eigenvalue by itself. Sorry for being lazy, it's a hot day. I've got to pull my weight here.

That's pretty incomprehensible.
 
  • #12
Dick said:
That's pretty incomprehensible.

It's like this: Say the domain has a basis {(1 0 0), (0 1 0), (0 0 1)}. Now say T maps from here to a range with basis {(1 0 0), (0 1 0)}. So the matrix is 3x3 with a 0 row at the bottom. This implies that there is a 0 eigenvalue. Now let's say that the vector we map is v=a(1 1 1) (a=/=0). T(1 0 0)+T(0 1 0) would never equate to a(1 1 1) no matter what T is, since a(1 1 1) is not within the basis for the range. This is what I mean by "not being able to get v back". This applies to all vectors with three nonzero entries.
 
  • #13
evilpostingmong said:
It's like this: Say the domain has a basis {(1 0 0), (0 1 0), (0 0 1)}. Now say T maps from here to a range with basis {(1 0 0), (0 1 0)}. So the matrix is 3x3 with a 0 row at the bottom. This implies that there is a 0 eigenvalue. Now let's say that the vector we map is v=a(1 1 1) (a=/=0). T(1 0 0)+T(0 1 0) would never equate to a(1 1 1) no matter what T is, since a(1 1 1) is not within the basis for the range. This is what I mean by "not being able to get v back". This applies to all vectors with three nonzero entries.

You know if Tv=0*v for v not zero, then v is in ker(T). If ker(T) is not {0} then T is not invertible. You KNOW that. You don't have to try to reprove that fact with awkward bad examples every time you need it.
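A tiny sketch of the fact Dick is pointing to, with an illustrative diagonal matrix (not from the thread): a zero eigenvalue gives a nonzero vector in ker(T), so T cannot be invertible.

Code:
import numpy as np

T = np.diag([2.0, 3.0, 0.0])      # eigenvalues 2, 3, 0
v = np.array([0.0, 0.0, 1.0])     # eigenvector for the eigenvalue 0

print(T @ v)                      # [0. 0. 0.] -> v is a nonzero vector in ker(T)
print(np.linalg.det(T))           # 0.0        -> T is not invertible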
 
  • #14
The problem is that when given <Tv, v> is >0, T doesn't have to be invertible.
T can be an orthogonal projection with all eigenvalues >0 (except for, say, one eigenvalue being 0) and <Tv, v> can still be >0. I mean, we know that <Tv, v> is >0, but we don't know whether or not T is invertible. But it is still possible for <Tv, v> to be >0 when T is invertible.
 
  • #15
evilpostingmong said:
The problem is that when given <Tv, v> is >0, T doesn't have to be invertible.
T can be an orthogonal projection with all eigenvalues >0 (except for, say, one eigenvalue being 0) and <Tv, v> can still be >0. I mean, we know that <Tv, v> is >0, but we don't know whether or not T is invertible. But it is still possible for <Tv, v> to be >0 when T is invertible.

That's a pretty poor example of a mapping where <Tv,v> >0. <Tv,v>=0 for the vectors that are projected out. What's your point?
 
  • #16
Dick said:
That's a pretty poor example of a mapping where <Tv,v> >0. <Tv,v>=0 for the vectors that are projected out. What's your point?

Let's say v=(1 1 1). Now Tv=(1 1 0) with (0 0 1) being in null T. Now this is an orthogonal projection since the inner product between (1 1 0) and (0 0 1) is 0. But the inner product between (1 1 0) and (1 1 1), or <Tv, v>, is greater than 0. Orthogonal projections are not invertible. But the inner product <Tv, v> is >0. The problem is using <Tv, v> >0 to prove that T is invertible.
 
  • #17
evilpostingmong said:
Let's say v=(1 1 1). Now Tv=(1 1 0) with (0 0 1) being in null T. Now this is an orthogonal projection since the inner product between (1 1 0) and (0 0 1) is 0. But the inner product between (1 1 0) and (1 1 1), or <Tv, v>, is greater than 0. Orthogonal projections are not invertible. But the inner product <Tv, v> is >0. The problem is using <Tv, v> >0 to prove that T is invertible.

Look. The problem says prove T is invertible if <Tv,v> >0 for EVERY v not equal to zero (and T self-adjoint). A projection with a nontrivial kernel DOES NOT have that property. Period. End of discussion. Read the problem statement again, several times.
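To make the point concrete, here is an illustrative orthogonal projection (the matrix is an example of my own, not from the thread): <Pv, v> >= 0 for every v, but it equals 0 for the vectors that are projected out, so P fails the hypothesis that <Tv, v> > 0 for EVERY nonzero v, and indeed P is not invertible.

Code:
import numpy as np

# Orthogonal projection onto the span of e1 and e2.
P = np.diag([1.0, 1.0, 0.0])

v = np.array([1.0, 1.0, 1.0])
w = np.array([0.0, 0.0, 1.0])     # a vector that is projected out

print(np.dot(P @ v, v))           # 2.0 -> <Pv, v> > 0 for this particular v
print(np.dot(P @ w, w))           # 0.0 -> but not for EVERY nonzero vector
print(np.linalg.det(P))           # 0.0 -> P is not invertible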
 
  • #18
Ok, let <Tv, v> be >0. Now let v=e1+...+en where ek is an eigenvector of T from an orthonormal basis of V. Since T is positive and self-adjoint, let T be a diagonal matrix with one positive eigenvalue for each ek. Should I assume this? Wait, never mind, the eigenvalues should be nonnegative.
 
  • #19
State clearly what part of the proof you are trying to do. Say, I'm trying to prove that if ___ then ___. Fill in the blanks.
 
  • #20
Dick said:
State clearly what part of the proof you are trying to do. Say, I'm trying to prove that if ___ then ___. Fill in the blanks.

If <Tv, v> is >0, v is not a zero vector, and T is a positive operator, then T is invertible.
 
  • #21
Ok then since T is self-adjoint it has an orthonormal basis of eigenvectors with real eigenvalues, right? You know the eigenvalues are positive since <Tv,v> >0, also right? So as you said, in that basis the matrix of T is diagonal with positive entries (the eigenvalues) on the diagonal, still ok? Is T invertible? What does T^(-1) look like?
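In matrix terms, a quick sketch of what Dick is describing, with made-up positive eigenvalues: in the orthonormal eigenbasis T is diagonal with positive entries, and T^(-1) is the diagonal matrix of their reciprocals.

Code:
import numpy as np

eigenvalues = np.array([2.0, 5.0, 0.5])     # all strictly positive
T = np.diag(eigenvalues)                    # T in its orthonormal eigenbasis

T_inv = np.diag(1.0 / eigenvalues)          # reciprocals on the diagonal
print(np.allclose(T @ T_inv, np.eye(3)))    # True
print(np.allclose(T_inv, np.linalg.inv(T))) # True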
 
  • #22
Dick said:
Ok then since T is self-adjoint it has an orthonormal basis of eigenvectors with real eigenvalues, right? You know the eigenvalues are positive since <Tv,v> >0, also right? So as you said, in that basis the matrix of T is diagonal with positive entries (the eigenvalues) on the diagonal, still ok? Is T invertible? What does T^(-1) look like?

Ok, I took one post you have given me for granted. 0 is not positive. I've been moping around about a zero eigenvalue when in reality, T (as a positive operator) is not even supposed to have 0 in the first place. It is supposed to have eigenvalues with positive square roots. Square roots will not even be considered in the proof, btw.

Now if you want a matrix, T^(-1) is the inverse of an nxn matrix. Multiplying it by the matrix T gives the identity matrix, where 1's fill the diagonal and all other entries not on the diagonal are 0's.
Or in non-matrix form, I can demonstrate T^(-1) this way...
Let v=e1+...+en, dim V=n, where ek is an eigenvector in an orthonormal basis of V.
Now Tv=c1e1+...+cnen with each ck>0. Now when we apply T^(-1), we invert or undo what T has done to v. Now T^(-1)v=T^(-1)c1e1+...+T^(-1)cnen=e1+...+en.
 
  • #23
Ok, that's basically it. T^(-1) is also a diagonal matrix whose entries are the inverses of the diagonal entries of T. But notice your definition of a positive operator is only that <Tv,v> >=0. It's only when you add the condition <Tv,v> >0 that T is necessarily invertible. A general positive operator CAN have zero eigenvalues.
 
  • #24
Dick said:
Ok, that's basically it. T^(-1) is also a diagonal matrix whose entries are the inverses of the diagonal entries of T. But notice your definition of a positive operator is only that <Tv,v> >=0. It's only when you add the condition <Tv,v> >0 that T is necessarily invertible. A general positive operator CAN have zero eigenvalues.

Ok, just so we're clear... let's say that our diagonal matrix has one zero eigenvalue on its diagonal. This IS positive. But the problem is that we cannot guarantee that the v that T is mapping is not going to map to 0. So let's say that v=ek. If v=ek and 0 is the eigenvalue at entry k,k of our diagonal matrix, then <Tv, v>=<0*v, v>=0. You told me yourself... I should prove that T is invertible for every nonzero v where <Tv, v> is >0.
 
  • #25
An operator is either invertible or it's not invertible. I did not tell you to prove it's invertible only for v such that <Tv,v> >0. The premise is that <Tv,v> >0 for ALL nonzero v. Period. End of discussion. Again.
 
  • #26
Oh, wait. Are you trying to prove the converse? If <Tv,v>=0 for SOME nonzero v then T is not invertible? It would really help if you would say what you are trying to do.
 
  • #27
Dick said:
Oh, wait. Are you trying to prove the converse? If <Tv,v>=0 for SOME nonzero v then T is not invertible? It would really help if you would say what you are trying to do.

The idea that if <Tv, v>=0 for some nonzero v then T is not invertible was what I had in mind.

Suppose <Tv, v> is >0 for all nonzero v in V. Now let the matrix of T be a diagonal matrix containing one eigenvalue for each eigenvector within our orthonormal basis B (in V)={e1...en}. Now let v=ek (1<=k<=n) where ek is any member of B. If <Tv, v> is >0, which is true for all nonzero vectors in V, then we can conclude that all eigenvectors have eigenvalues >0. Therefore T has a null space of {0}.

Hope I wasn't being too vague here. The point I'm trying to make here is that if we can choose any ek where Tek=/=0 for some nonzero ek, then T is invertible. I'm also half asleep, so if I made a mistake, I'll fix it later when I get out of this mummified state.
 
  • #28
Use your orthonormal basis of eigenvectors. This time prove that if <Tv,v>=0 for some vector, NOT necessarily an eigenvector, that at least one eigenvalue must be zero. That would show T is not invertible.
 
  • #29
Dick said:
Use your orthonormal basis of eigenvectors. This time prove that if <Tv,v>=0 for some vector, NOT necessarily an eigenvector, that at least one eigenvalue must be zero. That would show T is not invertible.

Ok I will use the same basis and T. Now if <Tv, v>=0 for some v, then
we have <Tv, v>=<Te1, v>+...+<Tek, v>=<0e1, v>+...+<0ek, v>=0 where 1<=k.
If this is true, then at least one eigenvalue would be 0, thus T is not invertible.
 
  • #30
evilpostingmong said:
Ok I will use the same basis and T. Now if <Tv, v>=0 for some v, then
we have <Tv, v>=<Te1, v>+...+<Tek, v>=<0e1, v>+...+<0ek, v>=0 where 1<=k.
If this is true, then at least one eigenvalue would be 0, thus T is not invertible.

That 'proof' doesn't make ANY SENSE. 1<=k?? <0e1,v>?? This is not that hard. TRY to be coherent for once.
 
  • #31
Dick said:
That 'proof' doesn't make ANY SENSE. 1<=k?? <0e1,v>?? This is not that hard. TRY to be coherent for once.


Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that Tek=ck,k*ek with ck,k>=0), then <Ts, s>=0 if Ts=0*s where 0 is an eigenvalue. Therefore there are k 0 eigenvalues on T's diagonal, so s is a nonzero vector in ker T.
 
  • #32
evilpostingmong said:
Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that Tek=ck,k*ek with ck,k>=0), then <Ts, s>=0 if Ts=0*s where 0 is an eigenvalue. Therefore there are k 0 eigenvalues on T's diagonal, so s is a nonzero vector in ker T.

? Sure, <Ts,s>=0 if Ts=0*s. That's no surprise. But what is that supposed to prove??
 
  • #33
Dick said:
? Sure, <Ts,s>=0 if Ts=0*s. That's no surprise. But what is that supposed to prove??

Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that Tek=ck,k*ek with ck,k>=0), then <Ts, s>=0 if each eigenvector adding to s corresponds to an eigenvalue of 0. Therefore there are k 0 eigenvalues on T's diagonal that correspond to the k eigenvectors that add to s, so s is a nonzero vector in ker T.
Since there is a vector (s) that maps to zero, T is not invertible.
 
  • #34
evilpostingmong said:
Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that Tek=ck,k*ek with ck,k>=0), then <Ts, s>=0 if each eigenvector adding to s corresponds to an eigenvalue of 0. Therefore there are k 0 eigenvalues on T's diagonal that correspond to the k eigenvectors that add to s, so s is a nonzero vector in ker T.
Since there is a vector (s) that maps to zero, T is not invertible.

SHOW that all of the eigenvectors that combine to make s have eigenvalue zero. Don't just say so. That's STILL not a proof. Let s=c1*e1+...+cn*en. Set T(ei)=ai*ei so the ai are the eigenvalues (ai>=0 since T is positive). Take the e's to be orthonormal. Now compute <Ts,s> in terms of the ci's and ai's. Use that to back up your claim that there must be a zero eigenvalue.
 
  • #35
Dick said:
SHOW that all of the eigenvectors that combine to make s have eigenvalue zero. Don't just say so. That's STILL not a proof. Let s=c1*e1+...+cn*en. Set T(ei)=ai*ei so the ai are the eigenvalues (ai>=0 since T is positive). Take the e's to be orthonormal. Now compute <Ts,s> in terms of the ci's and ai's. Use that to back up your claim that there must be a zero eigenvalue.
Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that T(ckek)=ak,k*ckek with ak,k>=0), then <Ts, s>=0 if each eigenvector adding to s corresponds to an eigenvalue of 0. Therefore, setting s=c1e1+...+ckek, we have T(s)=a1,1c1e1+...+ak,kckek. Now <T(s), s>=<a1,1c1e1+...+ak,kckek, s>. Since each ai,i=0, <T(s), s>=<0*e1+...+0*ek, s>=0. Therefore there are k 0 eigenvalues on T's diagonal that correspond to the k eigenvectors that add to s, so s is a nonzero vector in ker T.
Since there is a vector (s) that maps to zero, T is not invertible.
 
  • #36
evilpostingmong said:
Ok, let <Ts, s>=0. Let s be a linear combination of k linearly independent vectors in our orthonormal basis. If <Ts, s>=0, and each eigenvalue of the eigenvectors of B is an entry on T's diagonal (such that T(ckek)=ak,k*ckek with ak,k>=0), then <Ts, s>=0 if each eigenvector adding to s corresponds to an eigenvalue of 0. Therefore, setting s=c1e1+...+ckek, we have T(s)=a1,1c1e1+...+ak,kckek. Now <T(s), s>=<a1,1c1e1+...+ak,kckek, s>. Since each ai,i=0, <T(s), s>=<0*e1+...+0*ek, s>=0. Therefore there are k 0 eigenvalues on T's diagonal that correspond to the k eigenvectors that add to s, so s is a nonzero vector in ker T.
Since there is a vector (s) that maps to zero, T is not invertible.

Garbage. You are just repeating yourself and throwing even more junk in. Forget the matrix of T. Reread my last post and tell me what the value of <Ts,s> is completely in terms of the ai's and ci's. I don't want to hear about anything else. If you can't do that, you can't prove it.
 
  • #37
Dick said:
Garbage. You are just repeating yourself and throwing even more junk in. Forget the matrix of T. Reread my last post and tell me what the value of <Ts,s> is completely in terms of the ai's and ci's. I don't want to hear about anything else. If you can't do that, you can't prove it.

Ok, s=c1e1+...+cnen. Now Ts=a1c1e1+...+ancnen. Now <Ts, s>=<a1c1e1+...+ancnen, s>
=<a1c1e1, s>+...+<ancnen, s>. Each ai=0, so now <0*c1e1+...+0*cnen, s>
=<0*c1e1, s>+...+<0*cnen, s>=0*c1^2+...+0*cn^2=0.
 
  • #38
Better. <Ts,s>=a1*c1^2+...+an*cn^2. (You should probably write |ci|^2 in case the ci's are complex, i.e. |ci|^2=ci(ci*) where the * is complex conjugation.) But now you don't just put the ai equal to zero. That's not proofy. You USE that to explain WHY at least one of the ai's must be zero if <Ts,s>=0. WHY? Saying WHY is what makes it a proof.
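A numerical check of that identity, with an illustrative self-adjoint T and vector s (neither taken from the thread): expand s in the orthonormal eigenbasis and compare <Ts, s> with a1*|c1|^2+...+an*|cn|^2.

Code:
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])   # self-adjoint example

a, Q = np.linalg.eigh(T)          # eigenvalues a_i, orthonormal eigenvectors in columns of Q
s = np.array([1.0, -2.0, 0.5])    # any vector
c = Q.T @ s                       # coordinates of s in the eigenbasis

print(np.dot(T @ s, s))           # <Ts, s> computed directly
print(np.sum(a * c**2))           # a1*|c1|^2 + ... + an*|cn|^2, the same number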
 
  • #39
Dick said:
Better. <Ts,s>=a1*c1^2+...+an*cn^2. (You should probably write |ci|^2 in case the ci's are complex, i.e. |ci|^2=ci(ci*) where the * is complex conjugation.) But now you don't just put the ai equal to zero. That's not proofy. You USE that to explain WHY at least one of the ai's must be zero if <Ts,s>=0. WHY? Saying WHY is what makes it a proof.

Oh right, I read you wrong when you said compute the inner product, and nothing else.
Sorry about that.
Ok, <Ts, s>=a1|c1|^2+...+an|cn|^2. Let 1<=n<=dim V and let |ci|>0. If <Ts, s>=0, then a1|c1|^2+...+an|cn|^2=0 if all eigenvalues ai of the eigenvectors adding to s are 0. Therefore there must be n 0 eigenvalues of T, making T non-invertible.

Please note that assuming n=dim V would force me to conclude that a1|c1|^2+...+an|cn|^2=0 if the number of 0's=dim V.
 
  • #40
evilpostingmong said:
Oh right, I read you wrong when you said compute the inner product, and nothing else.
Sorry about that.
Ok, <Ts, s>=a1|c1|^2+...+an|cn|^2. Let 1<=n<=dim V and let |ci|>0. If <Ts, s>=0, then a1|c1|^2+...+an|cn|^2=0 if all eigenvalues ai of the eigenvectors adding to s are 0. Therefore there must be n 0 eigenvalues of T, making T non-invertible.

Please note that assuming n=dim V would force me to conclude that a1|c1|^2+...+an|cn|^2=0 if the number of 0's=dim V.

Why are you trying to 'prove' stuff? Is it really necessary? That's maybe the most obscure and unclear version of the proof anyone could possibly give. Yes, there are n zero eigenvalues if all of the ai are zero. WHY are ANY of the ai=0? Why? Why? Why? That's the PROOF part.
 
  • #41
Dick said:
Why are you trying to 'prove' stuff? Is it really necessary? That's maybe the most obscure and unclear version of the proof anyone could possibly give. Yes, there are n zero eigenvalues if all of the ai are zero. WHY are ANY of the ai=0? Why? Why? Why? That's the PROOF part.

Oh whoops. Here, let me make this more clear.
Ok, suppose <Ts, s>=0. Now we have <Ts, s>=a1|c1|^2+...+an|cn|^2. Let 1<=n<=dimV. Since we are dealing with nonzero eigenvectors, assume |ci|^2 is >0. Knowing that T is a positive operator, all eigenvalues (ai) are >=0. Now, considering that ai is >=0, we know that if ai is not 0, then ai is >0, which would cause our inner product to not =0.
Therefore a1|c1|^2+...+an|cn|^2=0 when each ai=0.

Is this better? I assumed that <Ts, s>=0 and computed the inner product.
I used the fact that T is a positive operator (ai>=0) and all eigenvectors are nonzero (so |ci|^2>0), so the only way that the inner product can possibly equate to 0 (and not be greater than 0) is for each ai to be 0.
 
  • #42
That is hugely better than before - where you omitted ALL of the reasons. Now you've got most of them. Except you got one wrong. The reason why |ci|>0 has nothing to do with the 'eigenvectors being nonzero' (eigenvectors are ALWAYS nonzero). You can say there must be a ci which is nonzero because we are picking s to be nonzero. In the obscure "Let 1<=n<=dimV." section you seem to have meant to say pick n to be the number of nonzero ci's and rearrange the basis so that s=c1*e1+...+cn*en with all of the ci nonzero. But you left out the description of what n was supposed to be. Completely. Things like that make it almost impossible to read your 'proofs'.
 
  • #43
Dick said:
That is hugely better than before - where you omitted ALL of the reasons. Now you've got most of them. Except you got one wrong. The reason why |ci|>0 has nothing to do with the 'eigenvectors being nonzero' (eigenvectors are ALWAYS nonzero). You can say there must be a ci which is nonzero because we are picking s to be nonzero. In the obscure "Let 1<=n<=dimV." section you seem to have meant to say pick n to be the number of nonzero ci's and rearrange the basis so that s=c1*e1+...+cn*en with all of the ci nonzero. But you left out the description of what n was supposed to be. Completely. Things like that make it almost impossible to read your 'proofs'.

Alright, let me fix it.
Ok, suppose <Ts, s>=0. Now we have <Ts, s>=a1|c1|^2+...+an|cn|^2. Since we are dealing with a nonzero s, there must be at least one ci=/=0. Knowing that T is a positive operator, all eigenvalues (ai) are >=0. Now, considering that ai is >=0, we know that if ai is not 0, then ai is >0, which would cause our inner product to not =0. Therefore a1|c1|^2+...+an|cn|^2=0 when each ai=0.

I took out the 1<=n<=dimV part because it only complicates the proof, and I took out the unnecessary notion that each eigenvector is not 0.
 
  • #44
You made a wise choice to drop the 1<=n<=dimV part. Yes, it just makes things more complicated. You only need one c to be nonzero. But doing it that way you don't know that ALL of the ai=0. You only need to show ONE ai=0. You might also want to go back to the big picture. You were going to prove if <Ts,s>=0 for any nonzero s, then T is not invertible, as I recall.
 
  • #45
Dick said:
You made a wise choice to drop the 1<=n<=dimV part. Yes, it just makes things more complicated. You only need one c to be nonzero. But doing it that way you don't know that ALL of the ai=0. You only need to show ONE ai=0. You might also want to go back to the big picture. You were going to prove if <Ts,s>=0 for any nonzero s, then T is not invertible, as I recall.

Ok, suppose <Ts, s>=0. Now we have <Ts, s>=a1|c1|^2+...+an|cn|^2. Since we are dealing with a nonzero s, there must be at least one |ci|^2=/=0. Knowing that T is a positive operator, all eigenvalues (ai) are >=0. Now, considering that ai is >=0, we know that if ai is not 0, then ai is >0, which would cause our inner product to not =0 whenever ai=/=0 is multiplied by a nonzero |ci|. Therefore a1|c1|^2+...+an|cn|^2=0 if and only if ai=0 for all |ci|^2>0.
Therefore <Ts, s>=0 if and only if T has zero eigenvalues that correspond to nonzero vectors (in this case, cjej where |cj|=/=0) in the sum for s. Thus T is not invertible when <Ts, s>=0 for nonzero s and positive T.
 
  • #46
evilpostingmong said:
Ok, suppose <Ts, s>=0. Now we have <Ts, s>=a1|c1|^2+...+an|cn|^2. Since we are dealing with a nonzero s, there must be at least one |ci|^2=/=0. Knowing that T is a positive operator, all eigenvalues (ai) are >=0. Now, considering that ai is >=0, we know that if ai is not 0, then ai is >0, which would cause our inner product to not =0 whenever ai=/=0 is multiplied by a nonzero |ci|. Therefore a1|c1|^2+...+an|cn|^2=0 if and only if ai=0 for all |ci|^2>0.
Therefore <Ts, s>=0 if and only if T has zero eigenvalues that correspond to nonzero vectors (in this case, cjej where |cj|=/=0) in the sum for s. Thus T is not invertible when <Ts, s>=0 for nonzero s and positive T.

Fine.
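For completeness, an illustrative end-to-end check of both directions of the statement (the matrices and the random spot-checks are my own choices, not part of the thread):

Code:
import numpy as np

rng = np.random.default_rng(0)

# A positive operator WITH a zero eigenvalue: <Tv, v> = 0 for the corresponding
# eigenvector (so the hypothesis fails), and T is not invertible.
T0 = np.diag([3.0, 1.0, 0.0])
v0 = np.array([0.0, 0.0, 1.0])
print(np.dot(T0 @ v0, v0), np.linalg.det(T0))   # 0.0 0.0

# A positive operator with all eigenvalues > 0: invertible, and <Tv, v> > 0
# for every nonzero v (spot-checked on random vectors here).
T1 = np.diag([3.0, 1.0, 0.5])
print(np.linalg.det(T1) != 0)                   # True
for _ in range(5):
    v = rng.standard_normal(3)
    print(np.dot(T1 @ v, v) > 0)                # True each time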
 