Understanding Projectors in Quantum Mechanics: A Mathematical Approach

  • Thread starter: Sammywu
  • Tags: Projector, QM

Summary
The discussion centers on the mathematical understanding of projectors in quantum mechanics, particularly their definition and properties. A projector is defined as an operator P that satisfies P^t = P and P^2 = P, with eigenvalues of 0 and/or 1. The conversation includes derivations related to projectors acting on states in Hilbert space, emphasizing the importance of normalization and orthogonality in these operations. Additionally, the participants explore whether projectors can serve as a basis for certain vector spaces and clarify that non-trivial projectors do not have inverses due to their filtering nature. Overall, the thread highlights the mathematical intricacies of projectors and their role in quantum mechanics.
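The defining properties quoted above can be checked numerically in a small, hypothetical example (a rank-1 projector built from a normalized 3-vector; an illustrative sketch, not part of the thread):

```python
import numpy as np

# Hypothetical rank-1 example: P = |v><v| for a normalized real vector v.
v = np.array([1.0, 2.0, 2.0]) / 3.0       # ||v|| = 1
P = np.outer(v, v)                        # P = |v><v|

assert np.allclose(P, P.T)                # P^t = P (self-adjoint)
assert np.allclose(P @ P, P)              # P^2 = P (idempotent)
assert np.allclose(np.sort(np.linalg.eigvalsh(P)), [0.0, 0.0, 1.0])  # eigenvalues 0 and/or 1
print("P satisfies all three projector properties")
```

The eigenvalue check also shows the "filtering" nature mentioned in the summary: a non-trivial projector has a 0 eigenvalue, so it annihilates part of the space and cannot be inverted.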
  • #121
[tex] < A > = [/tex]
[tex] \sum_{S_{P(a)}} a_n P(a_n) \ + \ \int_{S_{c(a)}} a P(a) da = [/tex]

( By E 3.4 and E 3.5, we derive: )

[tex] \sum_n \sum_{k=1}^{g(n)} a_n < a_n^k | \rho | a_n^k > \ + \ \int_{S_{c(a)}} a < a | \rho | a > da = [/tex]

( Using the definition of eigenkets,
[tex] A | a_n^k > = a_n | a_n^k > [/tex]
[tex] A | a > = a | a > [/tex]
we derive: )

[tex] \sum_n \sum_{k=1}^{g(n)} < a_n^k | \rho A | a_n^k > \ + \ \int_{S_{c(a)}} < a | \rho A | a > da [/tex]
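In a purely discrete, finite-dimensional toy model (the continuous part dropped; the random matrices are illustrative), the chain above reduces to ⟨A⟩ = Σ_n a_n ⟨a_n|ρ|a_n⟩ = Σ_n ⟨a_n|ρA|a_n⟩ = Tr(ρA), which a short numpy check confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Hypothetical discrete model: random self-adjoint A and density matrix rho.
X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = X + X.conj().T                        # observable, A = A^dagger
Y = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = Y @ Y.conj().T
rho /= np.trace(rho)                      # rho >= 0 and Tr(rho) = 1

a, V = np.linalg.eigh(A)                  # A |a_n> = a_n |a_n>, columns of V

# <A> = sum_n a_n <a_n| rho |a_n>   (discrete analogue of the sum + integral)
expect_sum = sum(a[n] * (V[:, n].conj() @ rho @ V[:, n]).real for n in range(d))

# ... = sum_n <a_n| rho A |a_n> = Tr(rho A)
expect_trace = np.trace(rho @ A).real

assert np.isclose(expect_sum, expect_trace)
print("<A> = Tr(rho A) =", round(expect_trace, 6))
```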
 
  • #122
E 3.1) a)

Even though my knowledge of the "tensor product" of vector spaces might not be enough, I did give some thought to how to handle this exercise.

1). How do we know the spectrum of Q is degenerate?

Unless we have another eigenbasis and self-adjoint operator, say E, and we find problems reconciling them with Q. For example,
[tex] < E | \psi > = \int < E | q > < q | \psi > dq [/tex]
does not hold, or does not meet our expectation, for a state [tex] | \psi > [/tex] .

2). We will then believe that we need to expand the Hilbert space by adding another one in. Why don't we choose "direct sum" instead of "tensor product"?

If we choose "direct sum", the basis will be extended as
[tex] v_1, v_2, ... v_n, w_1, ... w_n [/tex]
; we are basically just adding more eigenvalues and eigenvectors in doing so.

We need something like
[tex] < x | \psi > = \int < xy | \psi > dy [/tex]
or
[tex] P (xy) dxdy = | < xy | \psi > |^2 dxdy [/tex]
.

So we need to define a "product" of vector spaces that fits our needs.

I may continue later.

================================================

For E 3.1) a), I have "roughly" read a chapter about the unbounded and bounded solutions of square well potential problem.

I found that what leads to the discrete bound solutions is basically that the solutions outside of the well take the forms [tex] A e^{p_1 x/h} [/tex] at the LHS and [tex] D e^{-p_1 x/h} [/tex] at the RHS of the well: the condition E < 0 removes the imaginary part of the exponent, so only the exponentials that decay at infinity are allowed.

It's very interesting.
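For the bound (E < 0) case mentioned here, matching the decaying exterior exponentials to the interior cosine gives, for the even-parity states in dimensionless form, z tan z = sqrt(z0² - z²) with z = k·a and z0² = 2mV0a²/ℏ². A rough sketch (the value of z0 and all names are illustrative, not from the thread) that locates the discrete roots by bisection:

```python
import math

# Even-parity bound states of a finite square well, dimensionless form:
# z * tan(z) = sqrt(z0**2 - z**2),  z = k*a,  z0**2 = 2*m*V0*a**2/hbar**2.

def f(z, z0):
    """Vanishes exactly at an even-parity bound state."""
    return z * math.tan(z) - math.sqrt(z0**2 - z**2)

def bisect(g, lo, hi, tol=1e-12):
    """Plain bisection; assumes g changes sign on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

z0 = 8.0                                  # a moderately deep well (illustrative)
roots = []
n = 0
while n * math.pi < z0:                   # tan(z) > 0 only on (n*pi, n*pi + pi/2)
    lo = n * math.pi + 1e-9
    hi = min(n * math.pi + math.pi / 2 - 1e-9, z0 - 1e-9)
    if lo < hi and f(lo, z0) * f(hi, z0) < 0.0:
        roots.append(bisect(lambda z: f(z, z0), lo, hi))
    n += 1

print("even-parity z = k*a values:", [round(r, 4) for r in roots])
```

The discreteness the post remarks on shows up directly: only finitely many roots fit below z0.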
 
  • #123
E 3.1) B)

3) Looking at what we already have,

[tex] | \psi_x > = | x> < x | \psi_x > [/tex]
,
[tex] P(y) dy = < \psi | y> < y | \psi > dy [/tex]
,
[tex] | \psi_y > = | y> <y | \psi_y > [/tex]
,
[tex] P(x) dx = < \psi | x> < x | \psi > dx [/tex]
and our belief that
[tex] | \psi > = | x, y > < x,y | \psi > [/tex]
and
[tex] P(x,y) dx dy = < \psi | x,y> <x,y | \psi > dx dy [/tex]
, our quickest approach will be:
[tex] < x, y | \psi > = < y | \psi_y > < x | \psi_x > [/tex]
and
[tex] < \psi | x,y > = \overline{< y | \psi_y > < x | \psi_x >} = [/tex]
[tex] < \psi_y | y > < \psi_x | x > [/tex]
.

It turns out this satisfies both needs: the representation of the ket and the probability.

Not only that, we also see
[tex] P(x,y) dx dy = P(x) P(y) dx dy [/tex]
.

This satisfies the general probability rule:
[tex] P ( x, y ) = P(x)P(y|x) [/tex]
; the issue here is that P(y) = P(y|x), which means x and y need to be independent of each other.

So, if we do find the eigenvalues x and y independent of each other, we will expand our Hilbert space in the above way.

Our current ket space of | x,y,z,s > is of course an example.

4). If x and y are not independent, what will we get?

For example, I can easily produce a degenerate continuous spectrum by using the function f(x) = |x|.
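The factorization P(x,y) = P(x)P(y) for a tensor-product state, and its failure when x and y are correlated, can be illustrated with a two-valued discrete stand-in for the continuous labels (an illustrative sketch; the state vectors are arbitrary choices):

```python
import numpy as np

# Two-valued discrete stand-in for the continuous labels x and y.
psi_x = np.array([0.6, 0.8])                       # normalized single-label states
psi_y = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Product state: <x,y|psi> = <x|psi_x> <y|psi_y>
psi_prod = np.kron(psi_x, psi_y).reshape(2, 2)     # rows: x values, columns: y values
P_joint = np.abs(psi_prod)**2                      # P(x, y)
P_x, P_y = P_joint.sum(axis=1), P_joint.sum(axis=0)
assert np.allclose(P_joint, np.outer(P_x, P_y))    # P(x,y) = P(x) P(y)

# Correlated state (not a single tensor product): probabilities do not factor
psi_corr = (np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2.0)).reshape(2, 2)
Q_joint = np.abs(psi_corr)**2
Q_x, Q_y = Q_joint.sum(axis=1), Q_joint.sum(axis=0)
assert not np.allclose(Q_joint, np.outer(Q_x, Q_y))  # here P(y|x) != P(y)
print("product state factorizes; correlated state does not")
```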
 
  • #124
E 3.1) B).

5). In 3), we actually used two properties of the "tensor product":

[tex] ( | x_1 > + | x_2 > ) \otimes | y > = | x_1 , y > + | x_2 , y > [/tex]
[tex] | x > \otimes ( | y_1 > + | y_2> ) = | x , y_1 > + | x , y_2 > [/tex]

The last property
[tex] \alpha | x, y > = | \alpha x> \otimes | y> = | x> \otimes | \alpha y > [/tex]
will be used in
[tex] < A > = < x,y | A | x,y > [/tex]
.
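These tensor-product rules are exactly the bilinearity of the Kronecker product, which a quick numpy check (with arbitrary vectors) confirms:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=3)
x2 = rng.normal(size=3)
y = rng.normal(size=3)
alpha = 2.5

# (|x1> + |x2>) (x) |y> = |x1,y> + |x2,y>   (and symmetrically in the second slot)
assert np.allclose(np.kron(x1 + x2, y), np.kron(x1, y) + np.kron(x2, y))

# alpha |x,y> = |alpha x> (x) |y> = |x> (x) |alpha y>
assert np.allclose(alpha * np.kron(x1, y), np.kron(alpha * x1, y))
assert np.allclose(alpha * np.kron(x1, y), np.kron(x1, alpha * y))
print("np.kron is bilinear, matching the tensor-product rules")
```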
 
  • #125
Checking with a book, I found that my answer to E3.1) B) basically leads to "looking for a maximal set of commuting observables".

That is not what Eye asked, anyway. Eye's question already assumed there is a Hilbert space in which we find multiple "generalized kets" for one eigenvalue of a continuous spectrum.

Anyway, in looking for that answer, I found I need to clarify that, under our assumptions, only the discrete eigenkets are true kets.

So I took a detour to find a proof of why "separability" ensures "countably many discrete eigenkets".

The proof is actually quite straightforward, after a few days' rumination:
1). A Hilbert space is a vector space with an inner product.
2). The inner product defines a "norm", and the "norm" defines the distance between two vectors.
3). With the "distance", we can define open sets and hence a topology.
4). "Separability" (in the form equivalent, for metric spaces, to the Lindelöf property) says that every open covering of the Hilbert space has a countable subcovering.
5). Any two vectors belonging to an orthonormal basis are a distance [tex] 2^{1/2} [/tex] apart.
6). If we take as our open covering all open balls of radius 0.25, we get a countable subcovering of the Hilbert space.
7). Because any two orthonormal vectors are a distance [tex] 2^{1/2} [/tex] apart, which exceeds the diameter 0.5 of such a ball, no two orthonormal vectors can lie in one open ball; this implies the number of balls in the countable subcovering is at least the number of orthonormal vectors, so the basis is countable.

Does this look good?
==================================================

Anyway, if we assume the Hilbert space has only a countable orthonormal basis, then we know there is no true eigenvector for a continuous spectrum, because any real interval contains uncountably many points.

Of course, there must be a proof that the space of square-integrable functions has a countable orthonormal basis and is separable. So, if our "space" is isomorphic to that "function" space, it too has only countable orthonormal bases and is separable.
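For the last point: the Hermite functions are the standard countable orthonormal basis of the space of square-integrable functions on the real line. A grid sketch (purely illustrative) checking orthonormality of the first few:

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def hermite_function(n, x):
    """Orthonormal Hermite function psi_n(x) = H_n(x) exp(-x^2/2) / sqrt(2^n n! sqrt(pi))."""
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    norm = 1.0 / math.sqrt(2.0**n * math.factorial(n) * math.sqrt(math.pi))
    return norm * np.exp(-x**2 / 2) * hermval(x, coeffs)

# crude grid approximation of the inner products <psi_m | psi_n>
x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]
G = np.array([[np.sum(hermite_function(m, x) * hermite_function(n, x)) * dx
               for n in range(5)] for m in range(5)])

assert np.allclose(G, np.eye(5), atol=1e-6)
print("first 5 Hermite functions are orthonormal (Gram matrix = identity)")
```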
 
  • #126
Response to posts #108, 109

The limiting procedure which you allude to in post #108 is not at all "well-defined". You have given no "structure" which tells how A changes with each incremental step in the supposed "limiting procedure".
____

P.S. I had to change this post and cut out all of the quotes because the LaTeX was doing really strange things!

P.P.S. I also wanted to respond to your posts #114-124, but LaTex is malfunctioning. I may not have another opportunity to post more until the beginning of next week.
 
  • #127
Response to posts #108, 109 (again!)

This is what I originally wanted to post.

Here is what you said in post #109 regarding post #108:
My previous post regarding the object [tex] \psi_q [/tex] seems to make sense, but I just got into trouble when trying to verify it with the inner products or norms.

So, I guess it's not working any way.

You can just disregard that and just move on. ...
To what you have said here, I will only add that the limiting procedure which you allude to in post #108 is not at all "well-defined". For example, when you say:
Let's take
[tex] \triangle_n A = a_n | \psi_n > < \psi_n | [/tex]

[tex] \triangle_n a = a_n - a _{n-1} [/tex]
... you have given no "structure" which tells how A changes with each incremental step in the supposed "limiting procedure".
 
  • #128
Response to posts #114-124

Posts #114, 115

Your answers to E.3.3 and E.3.4 look fine.
______________

Post #116

In your answer to E.3.2, you began with:
[tex] \forall \psi \in H \ , [/tex] ...
I guess at that moment you forgot that H includes all of the vectors; i.e. even those with norm different from 1.

Later on you reach:
Now all I need to prove is
[tex] < \psi | ( \int_{S_{c(a)}} | a > < a | da ) | \psi > = [/tex]
[tex] \int_{S_{c(a)}} P(a) da [/tex]
The next step should have been to 'push' both the bra "<ψ|" and the ket "|ψ>" through and underneath the integral to get

[tex] \int_{S_{c(a)}} < \psi | a > < a | \psi > da \ . [/tex]

For some reason, you were inclined to do this only with regard to the ket (with a slight 'abuse' of notation):
[tex] < \psi | ( \ \int_{S_{c(a)}} | a > < a | da \ ) | \psi > = [/tex]
[tex] < \psi | ( \ \int_{S_{c(a)}} | a > < a | \psi > da \ ) > = [/tex]
But you got around this by invoking an "inner product" (again, with a slight 'abuse' of notation), and then you convinced yourself (quite correctly) that <ψ|a> = <a|ψ>* ... which is what you needed to take the final step.

Let's put an end to this 'abuse' of Dirac notation. Here's what you wrote:
[tex] < (( \ \sum_{S_{P(a)}} P_{a_{n}} | \psi > \ ) + ( \ \int_{S_{c(a)}} | a \prime > < a \prime | \psi > da \prime \ ) )| ( \int_{S_{c(a)}} | a > < a | \psi > da ) > = [/tex]
What you had there was the ket

[tex] ( \ \sum_{S_{P(a)}} P_{a_{n}} | \psi > \ ) + ( \ \int_{S_{c(a)}} | a > < a | \psi > da \ ) \ , [/tex]

which you needed to 'turn around' into a bra. That's easy to do: kets go to bras, bras go to kets, numbers go to their complex conjugates, and operators go to their adjoints. In this case, we get

[tex] ( \ \sum_{S_{P(a)}} < \psi | P_{a_{n}} \ ) + ( \ \int_{S_{c(a)}} < \psi | a > < a | da \ ) \ . [/tex]

... And that's all there is to it. Now, you just need to "slam" this expression on its right side with the expression

[tex] \int_{S_{c(a)}} | a \prime > < a \prime | \psi > da \prime [/tex]

and you will get the desired result. This is how to 'use' Dirac notation without 'abuse'.
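The 'turn around' rules (kets go to bras, numbers to conjugates, operators to adjoints) amount to taking a conjugate transpose, which can be sketched in a finite-dimensional stand-in (random vectors and operator, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
psi = rng.normal(size=d) + 1j * rng.normal(size=d)          # a ket |psi>
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))  # some operator

ket = A @ psi                       # the ket A|psi>
bra = psi.conj() @ A.conj().T       # its bra: <psi| A^dagger  (row-vector picture)

# kets -> bras is entrywise complex conjugation of the column vector
assert np.allclose(bra, ket.conj())

# pairing the bra with another ket |phi> gives <psi| A^dagger |phi>
phi = rng.normal(size=d) + 1j * rng.normal(size=d)
assert np.isclose(bra @ phi, np.vdot(ket, phi))
print("<psi|A^dagger is the bra of A|psi>, as the turn-around rules say")
```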

______________

Post #117

This answer for E.3.5 is fine ... except for the "dangling" da:
[tex] < \psi | a > = \overline{< a | \psi >} da [/tex]
But, later on, in post #119 you correct yourself.
______________

Posts #118, 119

In post #118 you say:
1) Let me start with
[tex] \sum_{S_{P(a)}} P_{a_{n}} + \int_{S_{c(a)}} | a > < a | da = I [/tex]
, so
[tex] \int_{S_{c(a)}} | a > < a | da = I - \sum_{S_{P(a)}} P_{a_{n}} [/tex]
.

2). Take derivative of a to it at the continuous part, then
[tex] \frac{d I}{ da } da = | a > < a | da [/tex]
or in other words,
[tex] \int_{S_{c(a)}} \frac{d I}{ da } da = \int_{S_{c(a)}} | a > < a | da [/tex]
The parameter "a" is the variable of integration. It is not "free" to take a derivative with respect to it.

What you do in post #119, along similar lines, however, is correct:
[tex] D ( a \prime ) = \int_{ - \infty}^{ a \prime } \overline{< a | \psi >} < a | \psi > da [/tex]
, then
[tex] P(a) = dD /da = \overline{< a | \psi >} < a | \psi > [/tex]
Indeed, you have a "free" parameter here.
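Post #119's construction, D(a') = ∫_{-∞}^{a'} |⟨a|ψ⟩|² da with P(a) = dD/da, can be sketched on a grid with a Gaussian ⟨a|ψ⟩ (an arbitrary illustrative choice, not from the thread):

```python
import numpy as np

# Illustrative choice for <a|psi>: a normalized Gaussian.
a = np.linspace(-8.0, 8.0, 2001)
da = a[1] - a[0]
amp = np.pi**-0.25 * np.exp(-a**2 / 2)    # <a|psi>
P = np.abs(amp)**2                        # P(a) = conj(<a|psi>) <a|psi>

# D(a') = integral_{-inf}^{a'} P(a) da, with a' a genuinely free parameter
D = np.cumsum(P) * da
assert abs(D[-1] - 1.0) < 1e-6            # total probability is 1

# differentiating with respect to the free parameter recovers P(a) = dD/da
P_back = np.gradient(D, da)
assert np.allclose(P_back[1:-1], P[1:-1], atol=1e-2)
print("dD/da recovers P(a) to grid accuracy")
```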

Everything else looks fine (modulo a couple of minor typos).
______________

Posts #120, 121

These look fine.
______________

Post #122
E 3.1) a)

Even though my knowledge of the "tensor product" of vector spaces might not be enough, I did give some thought to how to handle this exercise.

1). How do we know the spectrum of Q is degenerate?

Unless we have another eigenbasis and self-adjoint operator, say E, and we find problems reconciling them with Q. For example,
[tex] < E | \psi > = \int < E | q > < q | \psi > dq [/tex]
does not hold, or does not meet our expectation, for a state [tex] | \psi > [/tex] .
I don't understand what you meant in the above.

Next:
2). We will then believe that we need to expand the Hilbert space by adding another one in. Why don't we choose "direct sum" instead of "tensor product"?

If we choose "direct sum", the basis will be extended as
[tex] v_1, v_2, ... v_n, w_1, ... w_n [/tex]
; we are basically just adding more eigenvalues and eigenvectors in doing so.
... Right. So, that's not what we want.
We need something like
[tex] < x | \psi > = \int < xy | \psi > dy [/tex]
or
[tex] P (xy) dxdy = | < xy | \psi > |^2 dxdy [/tex]
.

So we need to define a "product" of vector spaces that fits our needs.
Yes, this is the idea.
______________

Post #123

Looks good (... I do see one small typo, though).

However, about:
, our quickest approach will be:
[tex] < x, y | \psi > = < y | \psi_y > < x | \psi_x > [/tex]
This is not true in general; i.e. it is true only when the state is such that x and y are "independent".
___
Our current ket space of | x,y,z,s > is of course an example.
... where "s", I assume, refers to spin. This is precisely the example I had in mind (except that I split it up into two examples: |x,y,z> and |x,s>).
___
4). If x and y are not independent, what will we get?
We will get "correlations".

Next:
For example, I can easily produce a degenerate continuous spectrum by using the function f(x) = |x|.
Yes, |Q| has a continuous, doubly degenerate spectrum.
______________

Post #124

E 3.1) B).

5). In 3), we actually used two properties of the "tensor product":

[tex] ( | x_1 > + | x_2 > ) \otimes | y > = | x_1 , y > + | x_2 , y > [/tex]
[tex] | x > \otimes ( | y_1 > + | y_2> ) = | x , y_1 > + | x , y_2 > [/tex]

The last property
[tex] \alpha | x, y > = | \alpha x> \otimes | y> = | x> \otimes | \alpha y > [/tex]
will be used in
[tex] < A > = < x,y | A | x,y > [/tex]
.
Yes ... except, the last relation should read

[tex] < A > = \int \int < x,y | A | x,y > \ dx \ dy \ . [/tex]
____________________

P.S. I may not be able to post again for another 3 weeks.
 
