Understanding Projectors in Quantum Mechanics: A Mathematical Approach

  • Thread starter: Sammywu
  • Tags: projector, QM
Sammywu
I saw some discussions about projectors in some threads. Also, this book uses the projector to define a pure state, but it does not say what a projector is.

http://www.math.sunysb.edu/~leontak/book.pdf

Through some math checking, assuming TR \ AP_\psi = ( A(\psi) , \psi ) (in this book, that is the expectation value for AP_\psi), I got the answer:
P_\psi ( e_n ) = \sum_i c_i \overline{c_n } e_i
if
\psi= \sum_i c_i e_i
for an orthonormal basis { e_n }.

That seems to be a good definition for it.

Also, if
\psi_1 = c_1 \psi + c_2 \psi_\bot
then
P_\psi ( \psi_1 ) = c_1 \psi , where
( \psi_\bot , \psi ) = 0 .

Is this right?
 
... did not provide what is a projector.
Definition: P is a "projector" if (and only if):

(i) P^t = P ,

and

(ii) P has eigenvalues 0 and/or 1 .

It then follows that P is a "projector" if, and only if, P^t = P and P² = P.
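As a quick numerical sanity check (a Python/NumPy sketch, not part of the original thread), both conditions can be verified for the rank-1 projector |ψ><ψ| built from a normalized ψ:

```python
import numpy as np

# Sketch: build P = |psi><psi| for a normalized psi in C^3 and check
# the two defining conditions of a projector.
rng = np.random.default_rng(0)
psi = rng.standard_normal(3) + 1j * rng.standard_normal(3)
psi /= np.linalg.norm(psi)            # normalize: (psi, psi) = 1

P = np.outer(psi, psi.conj())         # P = |psi><psi|

hermitian = np.allclose(P, P.conj().T)   # condition (i): P adjoint = P
idempotent = np.allclose(P @ P, P)       # it follows that P^2 = P
eigvals = np.linalg.eigvalsh(P)          # condition (ii): eigenvalues 0 and/or 1
print(hermitian, idempotent, np.round(eigvals, 12))
```

The eigenvalues come out as two zeros and a single one, as the definition requires.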

--------------------

Through some math checking, assuming TR \ AP_\psi = ( A(\psi) , \psi ) (in this book, that is the expectation value for AP_\psi), I got the answer:
P_\psi ( e_n ) = \sum_i c_i \overline{c_n } e_i
if
\psi= \sum_i c_i e_i
for an orthonormal basis { e_n }.

That sounds to be a good one for it.
I'm not quite sure what you are asking here. But all of the equations are correct. However, the "Trace" equation is said to be "the expectation value for A when the system is in the state ψ", not "the expectation value for APψ".

--------------------

Also, if
\psi_1 = c_1 \psi + c_2 \psi_\bot
then
P_\psi ( \psi_1 ) = c_1 \psi , where
( \psi_\bot , \psi ) = 0 .

Is this right?
Yes.
 
Eye,

Thanks.

The error you pointed out was my typo. I will think about the link between the equation I derived and your definition.
 
Eye,

Shall it be \overline{P^t} = P?

Thanks
 
By the way, this condition is also needed in my EQ:

\sum_i c_i \overline{c_i } = 1

Otherwise, P_\psi ( \psi ) = \sum_i c_i \overline{c_i } \psi

Or, in general, the EQ shall be:

P_\psi ( e_n ) = ( 1 / \sum_j c_j \overline{c_j } ) \sum_i c_i \overline{c_n } e_i
 
Notation conversion between inner product and bra/ket for C and D belonging to the Hilbert Space.

( C , D ) = < D | C > = \sum_j c_j \overline{d_j }

if C = \sum_j c_j e_j and D = \sum_j d_j e_j
 
Eye,

Shall it be \overline{P^t} = P?

Thanks
If by "t" you don't mean the "Hermitian transpose" (called the "adjoint"), then it shall.

------------------------

By the way, this condition is also needed in my EQ:

\sum_i c_i \overline{c_i } = 1
Oh, I was assuming that. It means that the state ψ is "normalized" (i.e. to unity).


Otherwise, P_\psi ( \psi ) = \sum_i c_i \overline{c_i } \psi
Yes. BUT, then, for nontrivial Pψ (i.e. Pψ ≠ 0), it is not a projector (because the eigenvalue 1 has become Σi |ci|²).


Or, in general, the EQ shall be:

P_\psi ( e_n ) = ( 1 / \sum_j c_j \overline{c_j } ) \sum_i c_i \overline{c_n } e_i
The factor in front should still be a 1 (not 1/Σj |cj|²). (BUT remember: this more general case is no longer a projector!)
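A small NumPy sketch of this point (the coefficients are made up for illustration): with an unnormalized ψ, the operator |ψ><ψ| stays self-adjoint but its nonzero eigenvalue becomes Σi |ci|², so it fails idempotence:

```python
import numpy as np

# Sketch: with an unnormalized psi, R = |psi><psi| is self-adjoint but
# NOT idempotent -- its nonzero eigenvalue is sum_i |c_i|^2 instead of 1.
c = np.array([1.0, 2.0, 2.0])          # made-up coefficients, sum |c_i|^2 = 9
psi = c                                 # psi = sum_i c_i e_i (standard basis)
R = np.outer(psi, psi.conj())

nonzero_eig = np.linalg.eigvalsh(R)[-1]   # largest eigenvalue
print(nonzero_eig)                        # 9.0 = sum |c_i|^2
print(np.allclose(R @ R, R))              # False: R^2 != R
```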

------------------------

Notation conversion between inner product and bra/ket for C and D belonging to the Hilbert Space.

( C , D ) = < D | C > = \sum_j c_j \overline{d_j }

if C = \sum_j c_j e_j and D = \sum_j d_j e_j
Looks fine.
 
Does a projector have an inverse? I never thought of it before, but now I'm wondering ... (it doesn't seem to have an inverse)
 
idempotent

P² = P

implies

P⁻¹P² = P⁻¹P

P = 1 ;

the only one with an inverse is the identity

(alternatively, you can say that on account of a 0 eigenvalue det = 0, and therefore, there is no inverse, except when P = 1)

(alternatively, you can say that when P ≠ 1 (and visualizing it geometrically) the mapping is MANY-to-ONE, and therefore has no inverse)
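Both arguments can be seen numerically (a sketch, using a made-up projector onto the first coordinate axis):

```python
import numpy as np

# Sketch: a nontrivial projector is singular (det = 0), so it has no
# inverse; the identity is the only invertible projector.
e = np.array([1.0, 0.0, 0.0])
P = np.outer(e, e)                    # projects onto the first axis

det = np.linalg.det(P)
print(det)                            # 0.0 -> no inverse

# The map is many-to-one: distinct vectors project to the same image.
v1 = np.array([1.0, 5.0, 0.0])
v2 = np.array([1.0, 0.0, 7.0])
print(np.allclose(P @ v1, P @ v2))    # True
```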
 
  • #10
I think I derived a few things that seem important to me:

1). For \psi_1 and \psi_2 \in H,

In order for
P_{\psi_1} + P_{\psi_2} = P_{\psi_1+\psi_2}
then
\psi_1 \bot \psi_2 i.e. ( \psi_1 , \psi_2 ) = 0

2). From that,

\sum_n P_{e_n} = P_{\sum_n e_n}
 
  • #11
I verified that:

For \psi and \psi_n \in H,

where \psi_n are eigenbasis of A,

this holds true:
< A | P_\psi > = \sum_n c_n \overline{c_n} E_n

where
\psi = \sum_n c_n \psi_n.
 
  • #12
A question is now whether P_{e_n} can serve as a basis ( or generator ) of GL(H) or A(H)?
 
  • #13
If
\sum_n a_n P_{e_n} = P_{\sum_n a_n e_n}
is true, then P_{e_n} can not serve as a basis, because all its linear combinations are also projectors then.

But 2 P_{e_n} does not seem to be the projector for
2 e_n; so in general, it still seems possible.
 
  • #14
Sammywu said:
I think I derived a few things that seem important to me:

1). For \psi_1 and \psi_2 \in H,

In order for
(a) P_{\psi_1} + P_{\psi_2} = P_{\psi_1+\psi_2}
then
(b) \psi_1 \bot \psi_2 i.e. ( \psi_1 , \psi_2 ) = 0
Do you mean "(a) implies (b)", "(b) implies (a)", or both ?


2). From that,

\sum_n P_{e_n} = P_{\sum_n e_n}
Yup.

--------------------------

I verified that:

For \psi and \psi_n \in H,

where \psi_n are eigenbasis of A,

this holds true:
< A | P_\psi > = \sum_n c_n \overline{c_n} E_n

where
\psi = \sum_n c_n \psi_n.
You didn't mention that the En are the eigenvalues of A. (... I assume you assumed <ψ|ψ> = 1.)

--------------------------

... All of this will become much, much simpler once you start using Dirac notation.

--------------------------------------------
 
  • #15
Eye,

Thanks.

I have a question about Lemma 1.1 at page 40 in that book.

I don't think there is anything that says the eigenbasis of a self-adjoint operator can always span the entire Hilbert space.

If that's the case, I can write a

\psi = \sum_i a_i \psi_i + b \psi_\bot

where

\psi_\bot \bot \psi_n, for all n.

( M (\psi) , \psi )
might not equal
( \sum_i a_i P_{\psi_i} ( \psi ) , \psi ).
 
  • #16
Eye,

What I found below is a sufficient condition for this Lemma to be true:

( M (\psi) , \psi_\bot ) = 0

Anyway, if there is a countable basis, this seems to be true.

I also found I can easily set up a self-adjoint operator that has only a limited number of eigenvalues and maps the rest of the basis to zero.
 
  • #17
turin said:
Does a projector have an inverse. I never thought of it before, but now I'm wondering ... (it doesn't seem to have an inverse)

No it doesn't, except for the trivial projector on the whole space. The "feel it in the bones" argument is that when you project something, you don't know what the orthogonal component was (the one that has been filtered out).
The mathematical argument is of course that a non-trivial projector has 0 eigenvalues (of the orthogonal directions that have been filtered out!).

In fact, this property is at the heart of the "measurement problem" in QM: a projection (such as happens in von Neumann's view) can never be the result of a unitary evolution.

cheers,
Patrick.
 
  • #18
Sammywu said:
A question is now whether P_{e_n} can serve as a basis ( or generator ) of GL(H) or A(H)?
GL(n,C) is a group with respect to matrix multiplication. It is not "closed" under matrix addition, and therefore, does not have the vector space structure you are assuming in your question. On the other hand, the full set of n x n matrices with entries from C, M(n,C), is "closed" under matrix addition (as well as, multiplication, of course). So, you might want to pose your question with respect to M(n,C).

In that case, for n > 1, the answer is "no". M(n,C) has dimension n², whereas the Pi will span a subspace with dimension no larger than n (of course, the Pi are in fact linearly independent, so they span a subspace of dimension equal to n).
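The dimension count can be checked numerically (a sketch with n = 3; the projectors are the diagonal matrix units):

```python
import numpy as np

# Sketch: in M(n, C) (dimension n^2), the n diagonal projectors
# P_i = |e_i><e_i| are linearly independent but span only an
# n-dimensional subspace -- far too small to be a basis when n > 1.
n = 3
projectors = [np.outer(np.eye(n)[i], np.eye(n)[i]) for i in range(n)]

# Flatten each n x n matrix to a vector of length n^2 and compute the rank.
flat = np.array([p.ravel() for p in projectors])
rank = np.linalg.matrix_rank(flat)
print(rank, n * n)                    # 3 vs 9
```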
 
  • #19
Sammywu said:
If

[1] \sum_n a_n P_{e_n} = P_{\sum_n a_n e_n} ...
The object on the left-hand-side is not (in general) a projector. That object has eigenvalues an , whereas a projector has eigenvalues
0 and / or 1.

--------------------------

At this juncture, it is instructive to consider ordinary 3-D Euclidean space. Pick any unit vector n. Then, the projector corresponding to this unit vector is given by

[2] Pn(v) = (v ∙ n) n , for any vector v .

The description of [2] is "the projection of v along n". Do you remember what this means geometrically? (see figure).

--------

NOTE:

In Dirac notation, [2] becomes

Pn|v> = <n|v> |n> = |n> <n|v> = (|n><n|) |v> , for any |v> .

We therefore write:

Pn = |n><n| .

If you think of each ket as a column matrix and the corresponding bra as its Hermitian transpose (a row matrix) then this notation can be taken "literally".
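Taking the notation literally, as suggested, here is a sketch in NumPy (the unit vector n is made up for illustration):

```python
import numpy as np

# Sketch: a ket as a column matrix, its bra as the Hermitian transpose
# (a row matrix), and P_n = |n><n| as their outer product.
n_ket = np.array([[1.0], [1.0], [0.0]]) / np.sqrt(2)   # column matrix |n>
n_bra = n_ket.conj().T                                  # row matrix <n|

P = n_ket @ n_bra                     # |n><n| : a 3x3 matrix

v = np.array([[2.0], [0.0], [5.0]])
# P|v> = |n> <n|v> : the projection of v along n
print(P @ v)
print(np.allclose(P @ v, n_ket * (n_bra @ v)))          # True
```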

--------------------------

I suggest you reserve the symbol "P", in the above type of context, only for a "projector" proper. Also, I suggest you invoke the rule that the "subscript" of P is always a "unit" vector. These two prescriptions would then disqualify the "legitimacy" of the right-hand-side of [1] on both counts.

At the same time, if you want to consider generalizations for which a relation like [1] holds, then use a symbol "R" (or whatever else) instead of "P". The generalization of [2] which gives a relation like [1] is then simply:

[2'] Ru(v) = [ v ∙ (u/|u|) ] u , for any vector v .

But what is the motivation for reserving a special "symbol" for this operation? Its description is "project the vector v into the direction of u and then multiply by the magnitude of u". The meaningful aspects of this operation are much better expressed by writing the corresponding operator as |u| P_{u/|u|}.

--------------------------

Now, let's go back the definition I gave in post #2.


Definition: P is a "projector" if (and only if):

(i) P^t = P ,

and

(ii) P has eigenvalues 0 and/or 1 .

It then follows that P is a "projector" if, and only if, P^t = P and P² = P.
I am now strongly suggesting that we, instead, use the following as our "official" definition:

*************************
* 1a) Given any unit vector e, define the "projector onto e" by:
*
* Pe(v) = (v,e) e , for any vector v .
*
* Such a projector is said to be "1-dimensional".
*
* 1b) An operator P is said to be a "projector" if (and only if)
* it can be written as a sum of 1-dimensional projectors
* which project onto mutually orthogonal unit vectors.
*
*************************

This definition [1a) and 1b) taken together] is equivalent to the original one I gave. But I think it makes the meaning of "projector" much clearer.

------------------------------------------------------

All that I have said above should clarify matters like:


Sammywu said:
But 2 P_{e_n} does not seem to be the projector for
2 e_n ...
-----------
 

Attachments

  • projector.jpg (6.9 KB): figure showing the geometric projection of v along n
  • #20
Sammywu said:
I don't think there is anythings that says the eigenbasis of a self-adjoint operator can always span the entire Hilbert space.
[Note: you wrote "eigenbasis", when you meant "eigenvectors".]

The answer to your query is given in that book by Theorem 1.1 (called "The Spectral Theorem"), on p. 38. In simple language, it is saying that the answer is: "Yes, a self-adjoint operator will always have eigenvectors (or "generalized" eigenvectors) spanning the entire Hilbert space."

HOWEVER, you must NOTE that the definition of "self-adjoint" (in the case of an infinite-dimensional Hilbert space) is nontrivial (... in that book, the appropriate definition is given at the top of p. 36, in 1.1.1 Notations).

------------------------

For the sake of giving you (at least) something, here is some basic information. Let A be a linear operator acting in the Hilbert space.

Definition: A is "symmetric" iff <g|Af> = <Ag|f> for all f,g ∈ Domain(A).

Definition: A is "self-adjoint" iff: (i) A is symmetric; (ii) the "adjoint" At exists; and (iii) Domain(A) = Domain(At).

Lemma: At, the "adjoint" of A, exists iff Domain(A) is dense in the Hilbert space.

All that is missing in the above is a definition of "adjoint" (which I have omitted for the sake of brevity and simplicity). That definition would then give us a specification of Domain(At) and thereby complete the definition of "self-adjoint".

------------------------

Now, you might ask: How can it be that there is a linear operator A with a domain "smaller" than the whole Hilbert space, yet, at the same time, A has eigenvectors which span the entire space?

Well, first of all, this can only happen in an infinite-dimensional Hilbert space. Suppose A has eigenfunctions φn(x) with corresponding eigenvalues an. So,

[1] Aφn(x) = anφn(x) .

Since the φn(x) span the entire space, an arbitrary element ψ(x) of the space can be written as

[2] ψ(x) = Σn cnφn(x) .

The right-hand-side of [2] is an infinite sum, and, therefore, involves a limit. While every finite subsum is necessarily in the domain of A, it is possible that in the limit of the infinite sum, the resulting vector is no longer in that domain. ... As you can see, this sort of phenomenon can only occur when the Hilbert space is infinite-dimensional.

But what do we get if we, nevertheless, attempt to "apply" A to ψ(x) by linearity and use [1]? Let's try it:

Aψ(x) = Σn cn Aφn(x)

= Σn an cn φn(x) [3] .

As you may have guessed, when ψ(x) is not in Domain(A), the following occurs in [3]: while every finite subsum is necessarily an element of the Hilbert space, in the limit of the infinite sum the "result" is no longer in the Hilbert space.

--------
 
  • #21
Eye,

I wanted to print your response in order to read it more carefully. Unfortunately, I had some trouble with my printer. Hopefully I can print it tomorrow.

One of the issues I saw in your last response is that you seem to extend your definition of self-adjoint to a "non-trivial" one; my guess is you want to extend it to the one with eigenvectors that can span the entire Hilbert space.

Is it necessary?

Because if you do so, then P_{e_n} is not self-adjoint. Note it has only e_n as its eigenvector.

I did overlook that theorem, I shall look closer into it.

Anyway, I found these facts that help me see it more clearly:
Below, I was assuming a relaxed self-adjoint definition.

1).
\sum_n a_n P_{e_n}
can be simply represented as the diagonal matrix as diag( a_1, a_2, a_3 ... ).

So, it actually spans the group ( or ring ) formed by the matrices with only diagonal elements.

2). In general, they are not self-adjoint unless all a_i are real.

3). If all a_i are real, for any a_i not zero, e_i is one of its eigenvectors.

4). In particular, if
\sum_n a_n = 1 and a_n >= 0
then it's a "state". If more than one a_n > 0 , then it's a mixed state.

5). This also tells me that a set { P_{e_n} } is definitely not enough to span ( or generate ) all states, even though the { e_n } can span the Hilbert space.
 
  • #22
Eye,

By the way, are you Leon?

On page 37, the mapping defining the projection valued measure is not necessarily 1-1 and onto, right?
 
  • #23
Sammywu said:
Because if you do so, then P_{e_n} is not self-adjoint. Note it has only e_n as its eigenvector.
Every vector orthogonal to en is an eigenvector of P_{e_n} with eigenvalue zero. Clearly, P_{e_n} has a complete set of eigenvectors.


1).
\sum_n a_n P_{e_n}
can be simply represented as the diagonal matrix as diag( a_1, a_2, a_3 ... ).

So, it actually spans the group ( or ring ) formed by the matrices with only diagonal elements.

2). In general, they are not self-adjoint unless all a_i are real.

3). If all a_i are real, for any a_i not zero, e_i is one of its eigenvectors.

4). In particular, if
\sum_n a_n = 1 and a_n >= 0
then it's a "state". If more than one a_n > 0 , then it's a mixed state.

5). This also tells me that a set { P_{e_n} } is definitely not enough to span ( or generate ) all states, even though the { e_n } can span the Hilbert space.
1) The only problematic part here is the expression "spans the group (or ring)". True, the said objects form a group with respect to +, and a monoid with respect to ∙ , and that defines a ring. But when you talk about spanning, you are thinking of the group aspect (with the + operation) over the field C. This gives a vector space ... and if you want to acknowledge the monoid aspect with respect to ∙ , then it's called an (associative) algebra. In short, the simplest correct thing to say is:

So, it actually spans the vector space formed by the matrices ... over C.

(... if you have a "thing" for such terminologies try mathworld)

2) True.

3) This is the same error as the one identified at the beginning of this post ... eigenvalues can be 0 (it's the eigenvectors! which can't).

4) True.

5) True, the Pn are not enough. But what is the "this" that tells you?
 
  • #24
Eye,

Got you. My error was assuming zero cannot be an eigenvalue.
 
  • #25
Eye,

About (5), what it means to me is that I have no guarantee that I can generate a "position" eigenstate by a linear combination of "energy" eigenstates, even though I can generate its eigenfunction from "energy" eigenfunctions. Or, a mixed state of "energy" eigenfunctions won't equal any combination of "position" pure states.

I followed through the rest to page 45. Operators P and Q should be unbounded. What exactly does that mean?

Thanks
 
  • #26
Sammywu said:
By the way, are you Leon?
No. (... I think he is too busy writing difficult eBooks to be in the forum.)

-----------------------

On page 37, the mapping defining the projection valued measure is not necessarily 1-1 and onto, right?
It appears to me that such a mapping can never be one-to-one. It is certainly never onto.

-----------------------

Operators P and Q should be unbounded. What does it exactly mean by that?
Definition: A linear operator L is bounded iff there exists a constant C such that

(Lψ, Lψ) < C (ψ, ψ) , for all ψ ∈ H .

Can you see that there is no such constant C for Q or for P?
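A numerical sketch of why no such C exists for Q (multiplication by q): take ψ as the indicator function of (q0, q0 + 1), so (ψ, ψ) = 1 while (Qψ, Qψ) grows roughly like q0². (The choice of ψ here is an illustration, not from the book.)

```python
import numpy as np

# Sketch: for psi = indicator of (q0, q0+1), (psi, psi) = 1 while
# (Q psi, Q psi) = integral of q^2 over (q0, q0+1) > q0^2,
# which grows without bound as q0 -> infinity.
def ratio(q0, num=100_000):
    q = np.linspace(q0, q0 + 1, num, endpoint=False)
    norm2 = np.mean(np.ones_like(q)) * 1.0    # (psi, psi) ~ 1 (Riemann sum)
    qpsi2 = np.mean(q**2) * 1.0               # (Q psi, Q psi) ~ int q^2 dq
    return qpsi2 / norm2

for q0 in [0.0, 10.0, 100.0]:
    print(q0, ratio(q0))                      # grows roughly like q0^2
```

No single constant C can dominate all these ratios, so Q is unbounded.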
 
  • #27
Eye,

I take definition of inner product in L2(R, dq) as
( \varphi, \psi ) = \int \overline{\psi} \varphi dq
.

(q \psi , q \psi ) = \int \overline{q \psi } q \psi dq
= \int \overline{q} \overline{\psi} q \psi dq
= \int q^2 \overline{\psi } \psi dq

because q is real.

If it's bounded, then

\int ( q^2 - C ) \overline{\psi } \psi dq < 0
.

Now all I need to do is prove that no such C exists.

Am I on the right track?

Thanks
 
  • #28
I think that this track will lead you to a proof that Q is an unbounded operator.
 
  • #29
Eye,

Now to keep it easy, I pick a
\psi = ( | q - q_0 | e^{-(q-q_0)^2 })^{1/2}
.

It's easy to prove that
( \psi , \psi ) = 1
.

Considering the interval
[ q_0-1 , q_0+1 ] ,
\int_{q_0-1}^{q_0+1} q^2 * | \psi |^2 >= ( q_0 - 1)^2 * 2 * 1/2 * 1/e
.

I can prove ( q \psi , q \psi ) > ( q_0 - 1 )^2 / e
.

This is definitely unbounded, because all I need to do is move q_0 toward one end of the real line, and this value will grow without bound.

Does this look fine?

Thanks
 
  • #30
OK. I think that shows how Q is an unbounded operator.

What I did not show is how the commutator of two bounded operators can not be I. At this point, all I find is
( (AB-BA) \psi , (AB-BA) \psi ) = ( AB \psi, AB \psi) + (BA \psi, BA \psi )
.

So if AB-BA = I, then 1/2 <= C_1 * C_2 . Not further.

Anyway, I guess what originally bothered me is that the definition < Q | M > = TR QM was given while the trace was only defined for bounded operators.

So, here the problem is actually that QM needs to be bounded even if Q is not bounded.
 
  • #31
So, I just try to see whether QM is always of trace class.

Let M = \sum_n a_n P_{\psi_n}
where
\sum_n a_n = 1

TR QM = \sum_n ( Q \sum_i a_i P_{\psi_i} \psi_n , \psi_n )
= \sum_n ( Q a_n \psi_n , \psi_n )
= \sum_n a_n ( Q \psi_n, \psi_n )

Assuming
b_n = | ( Q \psi_n, \psi_n ) |
, whether QM is of trace class will depend on whether
\sum_n a_n b_n converges.

So, if I can find a set of \psi_n such that b_n = 2/a_n , then I have a QM not of trace class.
 
  • #32
Sammywu said:
Now to keep it easy, I pick a
\psi = ( | q - q_0 | e^{-(q-q_0)^2 })^{1/2}
This is not easy.


( \psi , \psi ) = 1
True.


Considering in the interval
[ q_0-1 , q_0 +1 ] ,
q^2 * | \psi |^2 >= ( q_0 - 1)^2 * 2 * 1/2 * 1/e
How so? For q = qo the LHS is 0.

------
... The idea behind the proof will work. But for technical reasons the proof has failed. Why not really keep it easy and choose ψ like below?

ψ(q)

= 1 , q Є (qo, qo + 1)
= 0 , otherwise

Then (Qψ, Qψ) > qo² .

------------------------------------

Define: ║φ║ = √(φ, φ)

Then: ║φ + ξ║ ≤ ║φ║ + ║ξ║

.... :surprise:

------------------------------------

So, I just try to see whether QM is always of trace class.
In the most general case, QM may or may not be.

In the following, you carry out the inquiry well:

So, I just try to see whether QM is always of trace class.

Let M = \sum_n a_n P_{\psi_n}
where
\sum_n a_n = 1

TR QM = \sum_n ( Q \sum_i a_i P_{\psi_i} \psi_n , \psi_n )
= \sum_n ( Q a_n \psi_n , \psi_n )
= \sum_n a_n ( Q \psi_n, \psi_n )

Assuming
b_n = | ( Q \psi_n, \psi_n ) |
, whether QM is of trace class will depend on whether
\sum_n a_n b_n converges.
Remember also that M is a state. So we also have

0 ≤ an ≤ 1 as well as ∑n an = 1 .

But still, this is not enough. In general the series can still diverge.

This tells us that our current definition for a state M is still too general. While all physical states do satisfy the definition of M, not all M's are physical states.
 
  • #33
Eye,

Note I just added an integral sign in front.

Considering in the interval
[ q_0-1 , q_0 +1 ] ,
\int_{q_0-1}^{q_0+1} q^2 * | \psi |^2 >= ( q_0 - 1)^2 * 2 * 1/2 * 1/e

Any way, I agree your proof is much easier and quicker than mine.

I need to read your response more thoroughly.

It seems you used the extended triangle inequality; that was what I was thinking yesterday but couldn't get proved and working.

I did branch out to think the issue of mixed state:

< Q , P_{\sum_n a_n \psi_n} > = TR \ QP_{\sum_n a_n \psi_n} =
\sum_n ( QP_{\sum_i a_i \psi_i} \psi_n , \psi_n ) = \sum_n ( Q \sum_j a_j \overline{a_n} \psi_j , \psi_n ) =
\sum_n \overline{a_n} ( \sum_j a_j Q \psi_j , \psi_n ) = \sum_n \overline{a_n} \sum_j a_j ( Q \psi_j , \psi_n )

<Q | M > = < Q , \sum_n a_n P_{\psi_n } > = TR \ Q \sum_n a_n P_{\psi_n} =
\sum_n a_n ( Q P_{\psi_n} \psi_n , \psi_n ) = \sum_n a_n ( Q \psi_n , \psi_n )

This shows that even though P_{\sum_n a_n \psi_n} and \sum_n a_n P_{\psi_n } differ, they do have the same expectation value.

Now, for P_{\sum_n a_n \psi_n} to be a state, (\sum_n a_n \psi_n , \sum_n a_n \psi_n ) = \sum a_n \overline{a_n} = \sum a_n^2 needs to be one. This condition is different from the condition for M to be a state, in that TR \ M = \sum_n (\sum_i a_i P_{\psi_i} \psi_n , \psi_n ) = \sum a_n needs to be one.

I thought I might be able to show some Qs will have the same expectation values for the mixed state and the pure state. Apparently this is a little more tedious than I thought.
 
  • #34
Eye,

The previous one is a little messy. I was trying to see whether the '//' will give me a new line.

So, I redo it here.

I did branch out to think the issue of mixed state:

< Q , P_{\sum_n a_n \psi_n} > = TR \ QP_{\sum_n a_n \psi_n} =
\sum_n ( QP_{\sum_i a_i \psi_i} \psi_n , \psi_n ) =
\sum_n ( Q \sum_j a_j \overline{a_n} \psi_j , \psi_n ) =
\sum_n \overline{a_n} ( \sum_j a_j Q \psi_j , \psi_n ) =
\sum_n \overline{a_n} \sum_j a_j ( Q \psi_j , \psi_n )

<Q | M > = \sum_n a_n ( Q \psi_n , \psi_n )

as we already showed earlier.


Now, for P_{\sum_n a_n \psi_n} to be a state,
(\sum_n a_n \psi_n , \sum_n a_n \psi_n ) =
\sum a_n \overline{a_n} = \sum a_n^2
needs to be one.
This condition is different from the condition for M to be a state, in that
TR \ M = \sum_n (\sum_i a_i P_{\psi_i} \psi_n , \psi_n ) = \sum a_n
needs to be one.

I thought I might be able to show some Qs will have the same expectation values for the mixed state and the pure state. Apparently this is a little more tedious than I thought.

Anyway, why do you say I is unbounded in an infinite-dimensional space?

( I \psi , I \psi ) = ( \psi , \psi ) < 2 ( \psi , \psi )

whether the space is infinite or finite dimensional.

Thanks
 
  • #35
Eye,

Changing my tactic, I gathered some facts:

Let
\psi = \sum b_i \psi_i
where
b_i \overline{b_i} = a_i
and
\sum b_i \overline{b_i} = 1
.

Now, this can be changed to:

< Q , P_{\sum_n b_n \psi_n} > = TR \ QP_{\sum_n b_n \psi_n} =
\sum_n ( QP_{\sum_i b_i \psi_i} \psi_n , \psi_n ) =
\sum_n ( Q \sum_j b_j \overline{b_n} \psi_j , \psi_n ) =
\sum_n \overline{b_n} ( \sum_j b_j Q \psi_j , \psi_n ) =
\sum_n \overline{b_n} \sum_j b_j ( Q \psi_j , \psi_n )

Compare that to:

<Q | M > = \sum_n a_n ( Q \psi_n , \psi_n )

If I set
Q = \sum c_n P_{\psi_n}
, then
< Q | P_\psi > = \sum_n \overline{b_n} b_n c_n

and

< Q | M > = \sum_n a_n c_n

They are the same.

So for any Qs as \sum c_n P_{\psi_n}, we will have the same expectation value for the mixed state and the pure state.
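A NumPy sketch of this conclusion (all numbers made up): for Q diagonal in the ψn basis, the pure state with |b_n|² = a_n and the mixed state M give the same Tr(Qρ), even though the two density matrices differ:

```python
import numpy as np

# Sketch: if Q = sum_n c_n P_{psi_n}, the pure state psi = sum_n b_n psi_n
# (with |b_n|^2 = a_n) and the mixed state M = sum_n a_n P_{psi_n}
# give the SAME expectation value Tr(Q rho).
c = np.array([1.0, 2.0, 3.0])                 # eigenvalues of Q (made up)
Q = np.diag(c)                                # Q diagonal in the psi_n basis

b = np.array([0.5, 0.5, np.sqrt(0.5)])        # b_n with sum |b_n|^2 = 1
a = np.abs(b)**2                              # a_n = |b_n|^2

rho_pure = np.outer(b, b.conj())              # P_psi = |psi><psi|
rho_mixed = np.diag(a)                        # M = sum_n a_n P_{psi_n}

print(np.trace(Q @ rho_pure).real)            # sum_n a_n c_n
print(np.trace(Q @ rho_mixed).real)           # the same number
print(np.allclose(rho_pure, rho_mixed))       # False: different states!
```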
 
  • #36
Eye,

You know sometimes this latex thing is strange.

My question is:

Anyway, why do you say I is unbounded in an infinite-dimensional space?

I mean,

( I \psi , I \psi ) = ( \psi , \psi ) < 2 ( \psi , \psi )

whether the space is infinite or finite dimensional.

Thanks
 
  • #37
Anyway, back to the issue between the "mixed" state and the "pure" states, further questions shall be:

1). Can we relax the conditions for the "pure" states and the observables A ( I wrote Q earlier , since this is a general observable not the "position", I think A is better)?

2). Even though they have the same expectation value, shall they have different distributions, such as: < A | M > might have a multi-nodal distribution while < A | P_\psi > has a central normal distribution?
 
  • #38
Sammywu said:
Any way, why do you say I is unbounded in a infinite dimensional space?

I mean,

( I \psi , I \psi ) = ( \psi , \psi ) < 2 ( \psi , \psi )
I forgot to mention the following:

Metatheorem: A necessary condition for I to be unbounded (in the ∞-dimensional case) is serious confusion. :surprise:


I'll have to fix that.
 
  • #39
Eye,

Now, let me take an example.

Assuming two free particles,

\psi_1 = \int \int \delta ( k - k_1 , w - w_1 ) e^{i(k(x-x_1)+w(t-t_1))} dk dw

and

\psi_2 = \int \int \delta ( k - k_2 , w - w_2 ) e^{i(k(x-x_2)+w(t-t_2))} dk dw

represent their wave functions; their states shall be
P_{\psi_1} and P_{\psi_2}.

In combination, this mixed state M = 1/2 ( P_{\psi_1} + P_{\psi_2} ) shall represent their combined state, or the ensemble.

Correct?
 
  • #40
I see you are still using that notation where P is not a projector and its subscript isn't a unit vector:

P_{\sum_n a_n \psi_n} .

According to that notation, is it not true that the preceding expression equals

\sum_n a_n P_{\psi_n } ?

So, why don't you just use the second one?

-------------------------------

The step below has an error:

\sum_n ( QP_{\sum_i a_i \psi_i} \psi_n , \psi_n ) = \sum_n ( Q \sum_j a_j \overline{a_n} \psi_j , \psi_n )
I think this error results from what I just pointed out above, that you are treating this P as a projector when it is not! I also explained this same point in post #19:

[1] \sum_n a_n P_{e_n} = P_{\sum_n a_n e_n}

The object on the left-hand-side is not (in general) a projector. That object has eigenvalues an , whereas a projector has eigenvalues 0 and / or 1.

---------------

[2] Pn(v) = (v ∙ n) n , for any vector v

----------------

I suggest you reserve the symbol "P", in the above type of context, only for a "projector" proper. Also, I suggest you invoke the rule that the "subscript" of P is always a "unit" vector. These two prescriptions would then disqualify the "legitimacy" of the right-hand-side of [1] on both counts.

At the same time, if you want to consider generalizations for which a relation like [1] holds, then use a symbol "R" (or whatever else) instead of "P". The generalization of [2] which gives a relation like [1] is then simply:

[2'] Ru(v) = [ v ∙ (u/|u|) ] u , for any vector v .

But what is the motivation for reserving a special "symbol" for this operation? Its description is "project the vector v into the direction of u and then multiply by the magnitude of u". The meaningful aspects of this operation are much better expressed by writing the corresponding operator as |u| P_{u/|u|}.
Do you follow what I am saying?

------

All of this means there is no second condition ∑n an² = 1. (In which case, your post #35 no longer stands (I think :confused: ).)
 
  • #41
Eye,

Yes. I followed what you said. The correct definition of a projector here has a necessary condition: the norm of the ket needs to be one, or in other words it's a unit vector.

I corrected that in the next post: \sum b_i \overline{b_i} = 1 guarantees that \psi is a unit vector.

Thanks
 
  • #42
Eye,

You are right.

I should have used the second one to differentiate it from P.

OK.
 
  • #43
Sammywu said:
Any way, back to the issue between the "mixed" state and the "pure" states, further questions shall be:

1). Can we relax the conditions for the "pure" states and the observables A ( I wrote Q earlier , since this is a general observable not the "position", I think A is better)?

2). Even though they have the same expectation value, shall they have different distributions, such as: < A | M > might have a multi-nodal distribution and < A | P_\psi > has a central normal distribution?
In 1): relax how? ... or is that what 2) is explaining?

I don't understand 2).
 
  • #44
Eye,

1) "Relax" means I could even find simpler conditions, such as: maybe all vectors in H_M can be treated and given a role in this issue. Or maybe any A that is a function of P_{\psi_n} satisfies a similar property.

My guess on the question is probably NO. The conditions I found pretty much describe the situations of indistinguishable "mixed" and "pure" states.

2). What I am saying is that even though their expectation values are the same, the statistics will show us two different distributions. So now it's time to investigate the probability decompositions of AM and AP_\psi.

My thoughts sometimes do not come out immediately in correct math language; they are pragmatic thoughts to begin with. So, I wrote "distribution of < A | M >", things like that.

"Multi-nodal distribution" is my phrase for a distribution with multiple distinguishable points, like two clearly different frequencies of light pulses leaving two lines in the light spectrum.

"Central normal distribution" is my phrase to emphasize that a normal probability distribution has a central node with an inverse exponential shape.

I am sorry I do not remember what are correct math. terms for them.

Regards
 
  • #45
The example I gave is two particles shot rightward at different times; at time T we shine rays of light from the top downward. Now, will we predict from this math that the light detector at the bottom will show two spots instead of one spot?
 
  • #46
A pure state is represented by a unit vector |φ>, or equivalently, by a density operator ρ = |φ><φ|. In that case, ρ² = ρ.

Suppose we are unsure whether or not the state is |φ1> or |φ2>, but know enough to say that the state is |φi> with probability pi. Then the corresponding density operator is given by

ρ = p1 |φ1><φ1| + p2 |φ2><φ2| .

In that case ρ² ≠ ρ, and the state is said to be mixed. Note that the two states |φ1> and |φ2> need not be orthogonal (however, if they are parallel (i.e. differ only by a phase factor), then we don't have a mixed case but rather a pure case).

This is the real place to begin. And Dirac's notation is the superior way of representing these objects.
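A small NumPy sketch of this (the two states are made up and deliberately non-orthogonal):

```python
import numpy as np

# Sketch: a 50/50 mixture of two (not necessarily orthogonal) pure
# states gives a density operator rho with rho^2 != rho.
phi1 = np.array([1.0, 0.0])
phi2 = np.array([1.0, 1.0]) / np.sqrt(2)      # not orthogonal to phi1

rho = 0.5 * np.outer(phi1, phi1) + 0.5 * np.outer(phi2, phi2)

print(np.isclose(np.trace(rho), 1.0))         # True: Tr rho = 1
print(np.allclose(rho @ rho, rho))            # False: mixed, not pure
print(np.trace(rho @ rho))                    # "purity" Tr rho^2 < 1
```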

--------------------

Tell me, Sammy, have you learned the postulates of Quantum Mechanics in terms of simple, basic statements like the one below?

The probability of obtaining the result an in a measurement of the nondegenerate observable A on the system in the state |φ> is given by

P(an) = |<ψn|φ>|² ,

where |ψn> is the eigenvector corresponding to the eigenvalue an.
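A sketch of the postulate in NumPy (the observable A and state |φ> are made up): the probabilities |<ψn|φ>|² sum to 1 and reproduce <φ|A|φ>:

```python
import numpy as np

# Sketch: Born-rule probabilities P(a_n) = |<psi_n|phi>|^2 for a
# nondegenerate observable A, using a made-up Hermitian matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])                    # eigenvalues 1 and 3
a, V = np.linalg.eigh(A)                      # columns of V are |psi_n>

phi = np.array([1.0, 0.0])                    # state |phi>, <phi|phi> = 1
probs = np.abs(V.conj().T @ phi)**2           # P(a_n) = |<psi_n|phi>|^2

print(probs, probs.sum())                     # probabilities, summing to 1
print(probs @ a, phi.conj() @ A @ phi)        # both give <phi|A|phi>
```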
 
  • #47
Eye,

I have seen that postulate but am not fully convinced of it; I have tried to understand it and see whether my interpretation is correct.

So, I try to translate that here:

P(a_n) = ( \int \overline{\psi_n} \varphi )^2

Then
a_n P(a_n) = a_n ( \int \overline{\psi_n} \varphi )^2 =
( \int \overline{\psi_n} (a_n)^{1/2} \varphi )^2

Is there something wrong with my translation?

It seems that this is more likely.
P(a_n) = \int \overline{\psi_n} \varphi
 
  • #48
I thought :

< A | M > = < \varphi | A | \varphi > =
\int \overline{\varphi} (A) \varphi

In order to lead us there.

If
P(a_n) = \int \overline{\psi_n} \varphi

Then
\sum_n a_n P(a_n) =
\int \sum_n ( a_n \overline{\psi_n} ) \varphi

This might lead us there because a_n \psi_n = A \psi_n.
 
  • #49
Eye,

If I use Dirac notation,

< \varphi | A | \varphi > =

< \varphi | A \sum_n | \psi_n > < \psi_n | \varphi > =

\sum_n < \varphi | A | \psi_n > c_n =
\sum_n \overline{c_n} c_n < \psi_n | A | \psi_n > =
\sum_n \overline{c_n} c_n a_n =
\sum_n | < \psi_n | \varphi > |^2 a_n


Assuming < \psi_n | \varphi > = c_n
, so
| \varphi > = \sum_n c_n | \psi_n >
.
That do agree with your formula.

Does that look good to you?

But I do have some trouble showing that with an integral of a Hermitian operator.
 
  • #50
Eye,

Back to what you said, the mixed state does not seem to apply to my proposed case.

I was confused about how we can use this mixed state. Now I have a better idea, but I still want to think further about it.

Anyway, I thought the translation between the three notations is:

( u , v ) = < v | u > = \int \overline{v} u

Is this correct?
 
