Understanding Projectors in Quantum Mechanics: A Mathematical Approach

  • Thread starter: Sammywu
  • Tags: Projector, QM
  • #51
Eye,

Is this where I went wrong?

I think I shall start with this.

P(a_n) =
( \int \overline{\psi_n(x)} \varphi (x) dx ) \overline{ ( \int \overline{\psi_n(y)} \varphi(y) dy ) }

\sum_n a_n c_n \overline{c_n} =
\sum_n a_n P(a_n) =
\sum_n a_n \int \int \overline{\psi_n(x)} \varphi(x) \psi_n(y) \overline{\varphi(y)} dx dy =
\sum_n \int \int \overline{\psi_n(x)} \varphi(x) A \psi_n(y) \overline{\varphi(y)} dx dy

I still need to see how this can equal:

\int \overline{\varphi(x)} A \varphi(x) dx



Thanks
 
  • #52
Eye,

Based on the two assumptions

\varphi = \sum_n c_n \psi_n
and
\int \overline{\psi_i} \psi_j = \delta_{i,j}
,

\int \overline{\varphi} A \varphi =
\int \overline{\varphi} A \sum_n c_n \psi_n =
\int \overline{\varphi} \sum_n c_n a_n \psi_n =
\sum_n c_n a_n \int \overline{\varphi} \psi_n =
\sum_n c_n a_n \overline{ \int \varphi \overline{\psi_n} } =
\sum_n c_n a_n \overline{c_n}
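
(A quick numerical sanity check of this equality: a minimal sketch, assuming a finite-dimensional stand-in for the Hilbert space; all names and values are illustrative, Python with numpy.)

[code]
import numpy as np

rng = np.random.default_rng(0)
N = 5

# Random orthonormal eigenbasis {psi_n}: the columns of a unitary matrix U.
U, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
a = rng.normal(size=N)                    # real eigenvalues a_n
A = U @ np.diag(a) @ U.conj().T           # A = sum_n a_n |psi_n><psi_n|

# Random normalized state phi, with coefficients c_n = <psi_n|phi>.
phi = rng.normal(size=N) + 1j * rng.normal(size=N)
phi /= np.linalg.norm(phi)
c = U.conj().T @ phi

lhs = (phi.conj() @ A @ phi).real         # the "integral" form of <phi|A|phi>
rhs = np.sum(a * np.abs(c) ** 2)          # sum_n a_n c_n conj(c_n)
print(np.isclose(lhs, rhs))               # True
[/code]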

So I verified that all three of these approaches give the same expectation value.

Now, back to your question: why should
P(a_n) = c_n \overline{c_n}
?

Any good argument about it?
 
  • #53
Eye,

Now I see where you are leading.

The probability decomposition for a measurement of A will be

P(a) =
\sum_{a_n \le a} P(a_n) =
\sum_{a_n \le a} c_n \overline{c_n}
.

Is that right?
 
  • #54
Now, if I use my original model of
M = 1/2 ( P_{\psi_1} + P_{\psi_2} )
or
M = 1/2 | \psi_1 > < \psi_1 | + 1/2 | \psi_2 > < \psi_2 |
even though
< \psi_2 | \psi_1 > \neq 0 ,
and let
P_{XM}(a)
be its probability decomposition,
this does seem to lead to a two-spot measurement.

One trouble I might need to take care of is that the two wavefunctions are not orthogonal.

I also recall that a book treats a multiple-particle model using
\psi = \psi_1 \psi_2
; I wonder how these two different models will turn out in this case.
 
  • #55
Anyway, back to the basic postulate: it does now seem very reasonable that the probability for eigenvalue a_n is

P(a_n) = < \psi_n | \varphi > < \varphi | \psi_n > = ( \psi_n , \varphi ) ( \varphi , \psi_n ) =
\int \overline{\psi_n} \varphi \int \psi_n \overline{\varphi}
 
  • #56
If I use Dirac notation,

< \varphi | A | \varphi > =

\sum_n | < \psi_n | \varphi > |^2 a_n

Assuming < \psi_n | \varphi > = c_n
, so
| \varphi > = \sum_n c_n | \psi_n >

Does that look good to you?
It is correct.
_________

Anyway, I thought the translation between the three notations is:

( u , v ) = < v | u > = \int \overline{v} u

Is this correct?
Looks fine.
_________
_________

... back to the basic postulate: it does now seem very reasonable that the probability for eigenvalue a_n is

P(a_n) = < \psi_n | \varphi > < \varphi | \psi_n > = ( \psi_n , \varphi ) ( \varphi , \psi_n ) =
\int \overline{\psi_n} \varphi \int \psi_n \overline{\varphi}
The basic postulates are the "true" starting point of Quantum Mechanics. From those postulates, one can then build the more complex versions which talk about "mixed states" and "expectation values" of observables ("nondegenerate" and "degenerate" cases) ... like on page 37 of that eBook.

Try to find a book in which those basic postulates are written in clear, concise terms. Those postulates should talk about a nondegenerate observable with a discrete set of eigenvalues, where the quantum system is in a pure state. There should also be a statement concerning the time evolution of a pure state in terms of the Schrödinger equation.

Once you understand and accept those basic postulates, then from them you will be able to derive - in a most transparent way - all of the more complex versions.

From that perspective, everything will be much clearer. I am quite sure of this.
 
  • #57
Eye,

I think I have a book that should have a discussion of that postulate. I have read it, but only went over it without much deeper thought.

I always want to think things through with my own reasoning and then compare that to what is in the book anyway. So I tried to analyze it here.

What I have here is a state \varphi representing a probability, and all I know is
( \varphi , \varphi ) =
< \varphi | \varphi > =
\int \overline{\varphi} \varphi = 1
.
Now, with an observable A, I might have different observed values a_n, and I can associate each one of them with a state \psi_n.

By the assumption of { \psi_n } being an orthonormal basis,

\varphi = \sum_n c_n \psi_n
.

The total probability as unity shall be decomposed into components representing the probability for each a_n .
1 = (\varphi, \varphi ) = ( \sum_n c_n \psi_n , \varphi ) = \sum_n c_n ( \psi_n , \varphi )
.

Since this decomposition of unity has a corresponding number for each a_n, I can assume that the probability for each a_n will be
c_n ( \psi_n , \varphi ) = ( \varphi , \psi_n ) (\psi_n , \varphi )
.

While analyzing this, there seems to be a condition that shall be in effect, i.e. all of the different possible outcomes a_n need to be independent of each other, and that is what the orthogonality of the eigenfunctions guarantees.

If the a_n are not independent of each other, i.e. { \psi_n } is not orthonormal, then some interference can be derived here.

Anyway, this is my own way of trying to comprehend this postulate, but it does seem to make it clearer how this postulate was formulated, and it also indicates how wavefunction interference might come into play.
 
  • #58
Another important fact I noticed is that c \psi_n is also an eigenfunction and is regarded the same as \psi_n as long as | c | = 1; this is related to what a stationary state is.

So \varphi can be decomposed in such a way that all c_i are real. Still, \sum_i c_i does not equal one, but \sum_i c_i^2 = 1, which shows me why c_i^2, not c_i, is the probability.

My intuition was that c_i should be the probability, not c_i^2; this makes it clearer to me why my intuition is wrong.
 
  • #59

Another way to look into this:

An abstract state can be represented in different orthonormal bases. By a unitary transformation, we can translate a state to a different representation in another basis, but its norm remains one. So the squares of its coefficients can conveniently be used to represent probabilities. The basis is used as the "coordinates" of the measurable.

For example, the Fourier transformation, as a unitary transformation, can transform a "position" representation into a "momentum" representation.
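
(A minimal numerical sketch of this point, assuming the discrete Fourier transform as a stand-in for the continuous one; numpy's norm="ortho" option makes it unitary, so the norm survives the change of representation.)

[code]
import numpy as np

rng = np.random.default_rng(1)
psi = rng.normal(size=256) + 1j * rng.normal(size=256)
psi /= np.linalg.norm(psi)                   # normalized "position" state

# A unitary change of representation: position -> momentum.
psi_momentum = np.fft.fft(psi, norm="ortho")

print(np.isclose(np.linalg.norm(psi_momentum), 1.0))  # norm is preserved
[/code]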

An operator as an observable can only be measured as certain real values; these are the eigenvalues, and they work just like the "coordinates" of the measurement. Only certain probability distributions ( i.e. states ) can be measured exactly as one of these real values: they are the eigenfunctions and pure states.

For example, the eigenfunction of the "position" observable for x_0 can be seen as \delta ( x - x_0 ) .

When measuring other states, only eigenvalues will appear, but since such a state has non-zero coefficients on different eigenvalues, it will show a distribution among these eigenvalues.

A degenerate observable has two or more orthogonal eigenfunctions measured as the same value, so the probability measured for this value shall be the sum of the squares of their coefficients.
 
  • #60
Sammywu said:
... there seems to be a condition that shall be in effect, i.e. all of the different possible outcomes a_n need to be independent of each other, and that is what the orthogonality of the eigenfunctions guarantees.
I am not quite sure what you mean by "independent of each other". If the |ψ_n> are not all mutually orthogonal, then the whole postulate 'falls apart'! ... we will no longer have ∑_n P(a_n) = 1.

-----

Another important fact I noticed is that c \psi_n is also an eigenfunction and is regarded the same as \psi_n as long as | c | = 1; this is related to what a stationary state is.
Yes, there is a connection.

------------------------------
------------------------------

Sammy, looking at all of the different points you are making, I get the sense that it might be instructive for us to go through each one of the postulates in a clear and concise way, one by one, step by step, ... . What do you say?
 
  • #61
Eye,

What I was saying about independence is that it came to my mind that the interference of two wave functions will be zero when they are orthogonal. Their interference seems to be represented by their inner product.

Of course, I know the necessary condition here is that these form an orthonormal basis. What I am saying is that we can ignore their interference because they are orthogonal, but it would be interesting to check the relationship between interference and the inner product of two arbitrary wave functions.

I think your suggestion of going over all the postulates is good.

Thanks
 
  • #62
Sammywu said:
What I was saying about independence is that it came to my mind that the interference of two wave functions will be zero when they are orthogonal. Their interference seems to be represented by their inner product.
Yes, I see.

If

|φ> = c11> + c22> ,

then

<φ|φ> = |c_1|^2 <φ_1|φ_1> + |c_2|^2 <φ_2|φ_2> + 2 Re{ c_1^* c_2 <φ_1|φ_2> } .

The last term is the "interference" term, and it vanishes if <φ_1|φ_2> = 0 .
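
(A small numerical sketch of this decomposition, with illustrative vectors in C^4; the Gram-Schmidt step below is just one way to manufacture an orthogonal pair.)

[code]
import numpy as np

rng = np.random.default_rng(2)
phi1 = rng.normal(size=4) + 1j * rng.normal(size=4)
phi1 /= np.linalg.norm(phi1)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
phi2 = v - (phi1.conj() @ v) * phi1      # remove the phi1 component
phi2 /= np.linalg.norm(phi2)             # now <phi1|phi2> = 0

c1, c2 = 0.6, 0.8j
phi = c1 * phi1 + c2 * phi2

norm2 = (phi.conj() @ phi).real
interference = 2 * (np.conj(c1) * c2 * (phi1.conj() @ phi2)).real
print(np.isclose(norm2, abs(c1)**2 + abs(c2)**2 + interference))  # True
print(np.isclose(interference, 0.0))     # True, since the kets are orthogonal
[/code]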

------------------

Is the above what you meant?

... it would be interesting to check the relationship between interference and the inner product of two arbitrary wave functions.
------------------
------------------

I think your suggestion of going over all the postulates is good.
I will post something soon.
 
  • #63
Two Beginning Postulates of QM

ATTENTION: Anyone following this thread ... if you find any points (or 'near' points) of error on my part, please do point them out.
___________

P0: To a quantum system S there corresponds an associated Hilbert space H_S.

P1: A pure state of S is represented by a ray (i.e. a one-dimensional subspace) of H_S.
___________

Notes:

N.0.1) Regarding postulate P0, in certain contexts it is possible to associate a Hilbert space with a particular dynamical 'aspect' of the quantum system (e.g. a Hilbert space corresponding to "spin", decoupled from, say, "position").

N.1.1) In postulate P1, a "ray" is understood to represent the "(pure) state" of a single quantum system. Some physicists prefer to let those terms designate an ensemble of "identically prepared" quantum systems. Such a distinction becomes relevant only in cases where one considers possible interpretations of the theory.

N.1.2) A ray is determined by any one of its (non-zero) vectors. Our convention is to use a "normalized" (i.e. to unity) ket |ψ> to designate the corresponding ray, and hence, the corresponding pure state. This means that two normalized kets |ψ> and |ψ'> which differ by only a phase factor (i.e. |ψ'> = α|ψ>, where |α| = 1) will represent the same "ray", and hence, the same "state".
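
(A minimal numerical illustration of N.1.2: two kets differing by a phase factor give the same projector |ψ><ψ|, i.e. the same ray; the values are illustrative.)

[code]
import numpy as np

psi = np.array([1.0, 1.0j]) / np.sqrt(2)
alpha = np.exp(1j * 0.7)              # any phase factor with |alpha| = 1
psi_prime = alpha * psi

# The rays (projectors) coincide, so both kets represent the same state.
P = np.outer(psi, psi.conj())
P_prime = np.outer(psi_prime, psi_prime.conj())
print(np.allclose(P, P_prime))        # True
[/code]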
___________

Exercise:

E.1.1) What, if anything, is wrong with the following?

Suppose that the kets |φ_1> and |φ_1'> represent the same state. Then, the kets |ψ> and |ψ'> given below will also represent the same state:

|ψ> = c_1|φ_1> + c_2|φ_2> ,

|ψ'> = c_1|φ_1'> + c_2|φ_2> .
___________
 
  • #64
Eye,

About the interference of wave functions: Yes. You got what I meant. I will have to ruminate over your simple answer, though.

Answer to the exercise:

If
| \psi' > = a | \psi >
| \varphi_1' > = a_1 | \varphi_1 >
| \varphi_2' > = a_2 | \varphi_2 >
then
a c_1 | \varphi_1 > + a c_2 | \varphi_2 > =
a_1 c_1 | \varphi_1 > + a_2 c_2 | \varphi_2 >
.

One solution to it is:
a = a_1 = a_2
i.e. they are multiplied by the same phase factor.

If
a \neq a_1
, then
| \varphi_1 > = ( ( a_2 - a ) c_2 / ( a - a_1 ) c_1 ) | \varphi_2 >
; i.e. | \varphi_1 > and | \varphi_2 > have to be the same state.

This shows that the statement is in general incorrect, except in the two solutions I showed above.
 
  • #65
Actually, for the second solution of them being the same state, there are other troubles to take care of: | ( a_1 - a ) / ( a - a_2 ) | needs to be one.

It does not seem easy to get a solution for this. I have to think about how to resolve it. Possibly this solution will not even work.
 
  • #66
| ( a_1 - a ) c_1 / ( a_2 - a ) c_2 | = 1 is equivalent to | a_1 - a | / | a_2 - a | = | c_2 | / | c_1 |. Knowing | a | = | a_1 | = | a_2 | = 1, we can view a, a_1 and a_2 as three unit vectors on the unit circle in the complex plane; a shall be chosen as a point on the unit circle such that the ratio of the chord length between a_1 and a to the chord length between a_2 and a is | c_2 | / | c_1 |.
 
  • #67
First off, you have attempted to answer a question slightly more general than the one I posed. In my question, there was no |φ2'>. But that doesn't really matter. ... We'll use your version of the question.

You start off well.
If
| \psi' > = a | \psi >
| \varphi_1' > = a_1 | \varphi_1 >
| \varphi_2' > = a_2 | \varphi_2 >
then
a c_1 | \varphi_1 > + a c_2 | \varphi_2 > =
a_1 c_1 | \varphi_1 > + a_2 c_2 | \varphi_2 >
But your final conclusion suggests to me that the main point has been missed.
This shows that statement is in general incorrect except in the two solutions ...
Let's look at your second "solution". You write:
If
a \neq a_1
, then
| \varphi_1 > = ( ( a_2 - a ) c_2 / ( a - a_1 ) c_1 ) | \varphi_2 >
; i.e. | \varphi_1 > and | \varphi_2 > have to be the same state.
In this case, you are right. If |φ1> and |φ2> themselves represent the same state, then so too will |ψ> and |ψ'>.

Now, let's look at your first "solution".
One solution to it is:
a = a_1 = a_2
i.e. they are multiplied by the same phase factor.
Can we say that the statement is correct in this case? For the statement to be correct, we must have a1 and a2 as arbitrary free parameters, except for the constraint |a1| = |a2| = 1; but this solution produces an additional constraint over and above that.

... So, your conclusion should have been:

The statement is incorrect in all cases, except when |φ1> and |φ2> represent the same state.

------------------------

NOW ...
Just to make sure that the main point hasn't been lost in all of this abstraction, let's look at a concrete example.

Given the state

c11> + c22> ,

then

i(c11> + c22>)

represents the same state,

but

ic11> + c22>

does not (unless, of course, |φ1> and |φ2> represent the same state).

In the first case, we have inserted a "global phase-factor", and that is OK. In the second case, however, we have inserted a "relative phase-factor", and that is not OK.
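
(A small numerical sketch contrasting the two cases, using the projector onto each ket to test "same state"; the basis vectors and coefficients are chosen purely for illustration.)

[code]
import numpy as np

phi1 = np.array([1.0, 0.0], dtype=complex)
phi2 = np.array([0.0, 1.0], dtype=complex)
c1, c2 = 1 / np.sqrt(2), 1 / np.sqrt(2)

psi = c1 * phi1 + c2 * phi2
psi_global = 1j * psi                        # global phase factor: OK
psi_relative = 1j * c1 * phi1 + c2 * phi2    # relative phase factor: not OK

proj = lambda v: np.outer(v, v.conj())       # the ray determined by v
print(np.allclose(proj(psi), proj(psi_global)))    # True: same state
print(np.allclose(proj(psi), proj(psi_relative)))  # False: different state
[/code]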

------------------------

Finally, for the sake of completeness, I offer a solution of my own along the same lines as the one you gave above.
____

First, we assume:

(i) c_1, c_2 ≠ 0 ,

and

(ii) |φ_1> and |φ_2> are linearly independent .

For, otherwise, it follows trivially that |ψ> and |ψ'> will define the same ray (i.e. represent the same state).


Next, by considerations similar to yours above, we note that

|ψ> and |ψ'> define the same ray if, and only if

[1] c_1(a - a_1)|φ_1> + c_2(a - a_2)|φ_2> = 0 ,

where (and this is the important part!) the parameters a1 and a2 are completely arbitrary except for the constraint |a1| = |a2| = 1.

We now reach our conclusion. We say: but from assumptions (i) and (ii) we see that relation [1] holds iff a = a1 = a2, implying that a1 and a2 cannot be completely arbitrary (except for the constraint) as required; therefore, |ψ> and |ψ'> do not define the same ray.

:smile:
------------------------
 
  • #68
Eye,

I am sorry. I did not notice there was no prime on the second \varphi_2.

I agree to your answer.

One part of my calculation was wrong. Somehow | c_1 | / | c_2 | = 1 slipped into my mind, which is not true, so I corrected my answer in case somebody else is reading this post. Anyway, that was showing that even if they are all the "same" state, a needs to be chosen in a correct mathematical way.
 
  • #69
Another Postulate

P2: To a physical quantity A measurable on (the quantum system) S, there corresponds a self-adjoint linear operator A acting in H_S. Such an operator is said to be an "observable".
___________

Notes:

N.2.1) From "The Spectral Theorem" for self-adjoint operators, it follows that:

(a) A has real eigenvalues;

(b) the eigenvectors corresponding to distinct eigenvalues are orthogonal;

(c) the eigenvectors of A are complete (i.e. they span H_S).

The set of eigenvalues of A is called the "spectrum" of A. If A has a continuous spectrum, then the eigenvectors of A are said to be "generalized" and A is said to satisfy a "generalized" eigenvalue equation.

N.2.2) We now enumerate three special cases for A:

(1) A has a discrete, nondegenerate spectrum; then

A = ∑_n a_n|u><u_n| .

(2) A has a discrete (possibly degenerate) spectrum; then

A = ∑_n a_n P_n .

(3) A has a continuous, nondegenerate spectrum; then

A = ∫ a |a><a| da .

In each case, the RHS of each of the above relations is referred to as the "spectral decomposition" for A.

Case (1) is a particularization of case (2) (with P_n = |u_n><u_n|). For the general case of (2), the a_n are the eigenvalues of A and the P_n are the corresponding eigenprojectors.

Examples of case (3) are any of the components (Q_j or P_k) of the position or momentum observables.
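
(A minimal numerical sketch of the spectral decomposition in case (1), assuming a random finite-dimensional self-adjoint matrix as a stand-in for A; numpy's eigh returns real eigenvalues and orthonormal eigenvectors.)

[code]
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2                 # a self-adjoint matrix

eigvals, U = np.linalg.eigh(A)           # real a_n, orthonormal columns u_n
# Case (1): A = sum_n a_n |u_n><u_n|
A_rebuilt = sum(a * np.outer(U[:, n], U[:, n].conj())
                for n, a in enumerate(eigvals))
print(np.allclose(A, A_rebuilt))         # True
[/code]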
___________

Exercises:

E.2.1) From N.2.1) (b) (i.e. the eigenvectors corresponding to distinct eigenvalues are orthogonal), show that for a spectral decomposition of the type in N.2.2) (2), i.e.

A = ∑_n a_n P_n ,

it follows that

P_j P_k = δ_{jk} P_k .

From N.2.1) (c) (i.e. the eigenvectors of A are complete), show that

∑_n P_n = I .

E.2.2) Use, as an example, the observable Q (for the position of a spinless particle moving in 1-dimension) to explain why the eigenkets |q> are said to be "generalized" and, therefore, why Q is said to satisfy a "generalized" eigenvalue equation.
___________
 
  • #70
Eye,

I).
I think you have a typo here. Did you miss a subscript n here in this paragraph?

N.2.2) We now enumerate three special cases for A:

(1) A has a discrete, nondegenerate spectrum; then

A = ∑_n a_n|u_n><u_n| .

II)
Also, for a continuous spectrum, why are the eigenvectors called "generalized"? Does this have anything to do with the fact that most likely they will be "generalized" functions, such as the Dirac delta function, instead of regular functions?

III). Answer to the exercise. It's actually a little difficult because you did not define eigenprojectors.

So, I just gave a guess on this:

P_n ( \sum_i c_i u_{n,i} + b u_\bot ) = \sum_i c_i u_{n,i}
where
the u_{n,i} are eigenvectors of a_n and u_\bot \bot all u_{n,i} .

Under that assumption,
P_n^2 ( \sum_i c_i u_{n,i} + b u_\bot ) = P_n ( \sum_i c_i u_{n,i} ) =
\sum_i c_i u_{n,i}

so P_n^2 = P_n .

If i does not equal j, then
P_i P_j ( \sum_k c_k u_{i,k} + \sum_l c_l u_{j,l} + b u_\bot ) =
P_i ( \sum_l c_l u_{j,l} ) = 0
; so P_i P_j = 0 .

That shall take care of the first part of E.2.1).

The question that could be asked is how I can prove that every u can be decomposed into
\sum_k c_k u_{i,k} + \sum_l c_l u_{j,l} + b u_\bot
.

I think setting
c_k = < u_{i,k} | u >
c_l = < u_{j,l} | u >
and
b u_\bot = u - \sum_k c_k u_{i,k} - \sum_l c_l u_{j,l}
could take care of that.

I might verify this part later.

I am still working on the other two exercises.
 
  • #71
Because { u_{n,i} } spans the Hilbert space, any u can be expressed as:
\sum_n \sum_i c_{n,i} u_{n,i}

\sum_n P_n ( \sum_j \sum_i c_{j,i} u_{j,i} ) =
\sum_n \sum_j \sum_i c_{j,i} P_n ( u_{j,i} ) =
( because
P_n ( u_{j,i} ) = u_{j,i}
when n = j,
P_n ( u_{j,i} ) = 0
when n, j are not equal
)
\sum_n \sum_i c_{n,i} u_{n,i}

That takes care of
\sum_n P_n = I
.
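
(A small numerical check of both parts of E.2.1 on an assumed degenerate example; the matrix and projectors below are chosen purely for illustration.)

[code]
import numpy as np

# Degenerate spectrum: eigenvalue 1 twice, eigenvalue 2 once.
A = np.diag([1.0, 1.0, 2.0])
P1 = np.diag([1.0, 1.0, 0.0])   # eigenprojector for a_1 = 1
P2 = np.diag([0.0, 0.0, 1.0])   # eigenprojector for a_2 = 2

print(np.allclose(P1 @ P1, P1), np.allclose(P2 @ P2, P2))  # idempotent
print(np.allclose(P1 @ P2, 0))                             # orthogonal
print(np.allclose(P1 + P2, np.eye(3)))                     # sum_n P_n = I
print(np.allclose(A, 1.0 * P1 + 2.0 * P2))                 # A = sum_n a_n P_n
[/code]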
 
  • #72
About Q:
1) Our universe can be regarded as a system that has been observed for years.
2) We have noted that every "position" in this system can be described by three real values by setting up a 3-dim reference "coordinate" system or frame.
3) These three real values ( q_x, q_y, q_z ) have to be regarded as eigenvalues of 3 different observables ( i.e. operators Q_x, Q_y, Q_z ) if this approach of Hilbert spaces and states is to be used to investigate the system.
4) These real values ( q_x, q_y, q_z ) have been observed to form three continuous real lines ( three continuous spectra ).
5) By the arguments above and the expansion postulate, the basis of our system shall be constituted of a Hilbert space spanned by these three sets of eigenvectors, which have a minimal one-to-one relationship with the eigenvalues by assumption.
6) To simplify our analysis, we can just look at any one of the three observables, say Q_x.
To be continued ...
7). For
 
  • #73
Back in post #67, I "messed up"! There I wrote:

Now, let's look at your first "solution".
One solution to it is:
a = a_1 = a_2
i. e. they are mutiplied by the same phase factor.
Can we say that the statement is correct in this case? ...
-----

The continuation of what I wrote there is wrong! The kets |φ1'> and |φ2'> are specific kets, not a general "family" of kets. This means that as long as a1 = a2, then there exists a value of a (namely, a = a1 (= a2)) such that |ψ'> = a|ψ>. Thus, the overall conclusion should have been:

The statement is incorrect in all cases, except: (i) when |φ1> and |φ2> represent the same state, or (ii) a1 = a2 in the relations |φk'> = akk> (k = 1,2).

This means that your first "solution" is OK. I would only have added another sentence something like this:

So, we see that if a1 = a2 , then |ψ'> and |ψ> represent the same state.

... As you can see, the general "structure" of our question is like this: given "a1" and "a2", does there exist an "a"?

--------------

Now, however, this means that there is still a slight difficulty with your second "solution":
If
a \neq a_1
, then
| \varphi_1 > = ( ( a_2 - a ) c_2 / ( a - a_1 ) c_1 ) | \varphi_2 >
; i.e. | \varphi_1 > and | \varphi_2 > have to be the same state.
In the first "solution", you have covered the case a1 = a2. So, now you need to cover the case a1 ≠ a2. This must be the starting point for the second "solution". ... How can we continue?

Well ... suppose there exists an a. Since, a1 ≠ a2, then a must be different from one of the ak. Suppose for definiteness – without loss of generality – that a ≠ a1. Then, ... [the rest of your "solution" is fine].

(Do you understand the meaning of the expression "suppose for definiteness – without loss of generality"?)

-----
... Sorry about the confusion on my part.

Sometime soon I hope to fix that post (#67).
-----------
But now I see that "editing privileges" have changed, so it will just have to stay that way!
______________
 
  • #74
I).
I think you have a typo here. Did you miss a subscript n here in this paragraph?

N.2.2) We now enumerate three special cases for A:

(1) A has a discrete, nondegenerate spectrum; then

A = ∑_n a_n|u_n><u_n| .
Yes, I missed a subscript. (I would go put that in now, but there's no more "edit" for me for that post!)
__________________
II)
Also, for a continuous spectrum, why are the eigenvectors called "generalized"? Does this have anything to do with the fact that most likely they will be "generalized" functions, such as the Dirac delta function, instead of regular functions?
Yes, that is the basic idea. More generally, we can say that the resulting "eigenvectors" will not be square-integrable; we will have <a|a> = ∞, and so, technically, the "function" a(x) ≡ <x|a> will not belong to the Hilbert space of "square-integrable" functions. Nevertheless, this difficulty is not in any way serious. In a self-consistent way, we find that we are able to write <a|a'> = δ(a - a') .
__________________
III). Answer to the exercise. It's actually a little difficult because you did not define eigenprojectors.
Yes, some details have been omitted.

Here are some (but not all) of those details (for the discrete case). [Note: By assumption, A is a self-adjoint operator according to a technical definition which has not been stated in full (however, in post #20 of this thread, part of such a definition was given). Thus, in the following, certain related subtleties will not be given full explicit attention.]

Recall (b) and (c) from N.2.1):
(b) the eigenvectors corresponding to distinct eigenvalues are orthogonal;

(c) the eigenvectors of A are complete (i.e. they span HS).
Define E_n = { |ψ> ∈ H_S │ A|ψ> = a_n|ψ> } . From (b), the vectors of E_n are orthogonal to those of E_n', for n ≠ n'; so,

(b') E_n ⊥ E_n' , for n ≠ n' .

From (c), it follows that

(c') the vectors of ∪_n E_n span H_S ,

Now, from (b') and (c'), it follows that for any |ψ> ∈ H_S there exist unique |ψ_n> ∈ E_n such that

|ψ> = ∑_n |ψ_n> .

From the uniqueness of the |ψ_n>, we can define the "eigenprojectors" P_n by

P_n|ψ> = |ψ_n> .

... Alternatively, we can do it this way. Each E_n itself is a (closed (in the sense of "limits")) linear subspace of H_S, and, therefore, has a basis, which we can set up to be orthonormal. Let |u_{n,k}> be such a basis, where k = 1, ... , g(n) (where g(n) is the degeneracy (possibly infinite) of the eigenvalue a_n). We then define

P_n = ∑_{k=1}^{g(n)} |u_{n,k}><u_{n,k}| .

You can then convince yourself that this definition of P_n is independent of the choice of basis.
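
(A minimal numerical sketch of that last remark: P_n built from two different orthonormal bases of the same two-dimensional eigenspace comes out the same; the subspace and rotation angle are illustrative.)

[code]
import numpy as np

# Two orthonormal bases of the same 2-d subspace of C^3 (span of e1, e2).
u1 = np.array([1.0, 0.0, 0.0], dtype=complex)
u2 = np.array([0.0, 1.0, 0.0], dtype=complex)
theta = 0.3
v1 = np.cos(theta) * u1 + np.sin(theta) * u2   # a rotated basis of the span
v2 = -np.sin(theta) * u1 + np.cos(theta) * u2

P_u = np.outer(u1, u1.conj()) + np.outer(u2, u2.conj())
P_v = np.outer(v1, v1.conj()) + np.outer(v2, v2.conj())
print(np.allclose(P_u, P_v))   # True: P_n does not depend on the basis chosen
[/code]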
__________________

So, let's continue.
So, I just gave a guess on this:

P_n ( \sum_i c_i u_{n,i} + b u_\bot ) = \sum_i c_i u_{n,i}
where
the u_{n,i} are eigenvectors of a_n and u_\bot \bot all u_{n,i} .
This "guess" was fine.

Next:
If i does not equal j, then
P_i P_j ( \sum_k c_k u_{i,k} + \sum_l c_l u_{j,l} + b u_\bot ) =
P_i ( \sum_l c_l u_{j,l} ) = 0
; so P_i P_j = 0 .
Just one small problem: you already used "c" in the first sum over "k"; in the sum over "l" you need to use a different letter, say, "c_l" → "d_l". Then, your answer is fine.

Next:
The question that could be asked is how I can prove that every u can be decomposed into
\sum_k c_k u_{i,k} + \sum_l c_l u_{j,l} + b u_\bot
.

I think setting
c_k = < u_{i,k} | u >
c_l = < u_{j,l} | u >
and
b u_\bot = u - \sum_k c_k u_{i,k} - \sum_l c_l u_{j,l}
could take care of that.
Yes. Your idea works just fine. (Note: You need to make the change "c_l" → "d_l", or something like it.)
__________________

And now for the next part.
Because { u_{n,i} } spans the Hilbert space, any u can be expressed as:
\sum_n \sum_i c_{n,i} u_{n,i}

\sum_n P_n ( \sum_j \sum_i c_{j,i} u_{j,i} ) =
\sum_n \sum_j \sum_i c_{j,i} P_n ( u_{j,i} ) =
( because
P_n ( u_{j,i} ) = u_{j,i}
when n = j,
P_n ( u_{j,i} ) = 0
when n, j are not equal
)
\sum_n \sum_i c_{n,i} u_{n,i}

That takes care of
\sum_n P_n = I
.
Yes!
__________________
 
  • #75
About Q:
1) Our universe can be regarded as a system that has been observed for years.
2) We have noted that every "position" in this system can be described by three real values by setting up a 3-dim reference "coordinate" system or frame.
3) These three real values ( q_x, q_y, q_z ) have to be regarded as eigenvalues of 3 different observables ( i.e. operators Q_x, Q_y, Q_z ) if this approach of Hilbert spaces and states is to be used to investigate the system.
4) These real values ( q_x, q_y, q_z ) have been observed to form three continuous real lines ( three continuous spectra ).
5) By the arguments above and the expansion postulate, the basis of our system shall be constituted of a Hilbert space spanned by these three sets of eigenvectors, which have a minimal one-to-one relationship with the eigenvalues by assumption.
6) To simplify our analysis, we can just look at any one of the three observables, say Q_x.
To be continued ...
7). For
1) All we really care about (for the moment, at least) is whether or not our "model" will "explain" the "observed phenomena".

2) This is an essential part of our "model". It doesn't necessarily have to apply to the universe as a whole, but only some part of it (e.g. the "laboratory").

3) Yes. But we still have to define the Hilbert space and set up an "eigenvalue equation".

4) Yes ... at least to some (very good) approximation. We will use this hypothesis in our "model".

5) This is curious. I was expecting to define the Hilbert space first, and then define Q on it afterwards. I was expecting something like: let H be the space of all square-integrable functions R^3 → C ... and then we set up an "eigenvalue equation"

Q f(q) = q f(q) .

Afterwards, we would then "find out" (or "show") that the components of Q are self-adjoint, etc ... .

6) This is actually the "starting point" of E.2.2). You have taken some time to consider its 'justification' – that is good.

--------------------------

I think I'm just going to give the answer I had in mind. (I basically said it already in the previous post.) When we set up the eigenvalue equation for Q, we find that there are no "square-integrable" functions f such that [Qf](q) = q f(q) . So, strictly speaking, with respect to our Hilbert space, Q has no eigenfunctions. But then again, we can still make 'sense' of the "eigenvalue equation" ... and so, we say it has been "generalized", and that the solutions f are "generalized" eigenfunctions.
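
(A small numerical sketch of this point, assuming a grid discretization of Q with illustrative parameters: on the grid the "eigenfunction" is a spike whose height grows as the spacing shrinks, which is the discrete shadow of non-square-integrability in the continuum.)

[code]
import numpy as np

# Discretize q on a grid; Q acts as multiplication by q.
L, N = 10.0, 1000
q = np.linspace(-L / 2, L / 2, N)
dq = q[1] - q[0]
Q = np.diag(q)

# A delta-like grid "eigenfunction" at q0 = q[n0], normalized so that
# sum |f|^2 dq = 1; its amplitude 1/sqrt(dq) blows up as dq -> 0.
n0 = N // 2
f = np.zeros(N)
f[n0] = 1.0 / np.sqrt(dq)
print(np.allclose(Q @ f, q[n0] * f))   # Q f = q0 f holds on the grid
print(f[n0])                           # grows without bound as dq -> 0
[/code]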
 
  • #76
Eye,

Yes, it's strange. We can no longer edit our previous responses.

I agree that you caught me. I shall use a different letter for it.

Actually, I have no idea what you mean by this --
"suppose for definiteness – without loss of generality"?
Would you mind elaborating it?

About Q, I guess I actually went back to justify the Hilbert space. There could be more I can explore a little later; you have already noticed that. Basically, what will the Hilbert space look like if I build it with the eigenvectors of the Q operator? Since we know its eigenvectors are not square integrable on the real line, this Hilbert space might be bigger than the space of square-integrable functions. Some thorough knowledge of functional analysis and probability theory might be needed here.

Actually, that also leads to another question. That justification brings me a Hilbert space with a probability decomposition but not necessarily complex-valued coefficients. I think the need for complex-valued coefficients seems to be explainable by the scattering of electron diffraction or its interference.

My first response to this question was like this: I did not submit it because I was seeing some holes in it, but it's worth showing as a different perspective on this question. I was not sure of the scope of your question, so I decided to go back to discuss what the observed "positions" are ( the real-valued eigenvalues we can see ) and the assumed Q operator associated with them.

--->
In order to define Q | \psi >, you have to have a background manifold; then you can say Q | \psi > = q | \psi >, where q is not a constant.

In order to look for an answer of
Q | \psi_n > = q | \psi_n > = q_n | \psi_n >
where q_n needs to be a constant:

If | \psi_n > is a function f(q) of the coordinate q of the manifold M, then q f(q) = q_n f(q) ; | \psi_n > will have to be a function f(q) that is one when q equals q_n and zero elsewhere.

---> Then this will lead to f(q) not being a square-integrable function.
 
  • #77
This is what I was going to continue on the Hilbert space built by the Q eigenvectors:

7). We can see two approaches to expanding the space now.
a. Discrete approach : | u > = \sum_n a_n | u_n > or
A = \sum_n a_n | u_n > < u_n |
b. Continuous approach : | u > = \int a | u > du or
A = \int a | u > < u | du

Note I used a for the coefficient instead of u. It reads better to me in that it shows it's a function of u but dependent on A. It's more comparable to the discrete notation too. Hope you agree.

8). Of course, by postulate N.2.2) (3), we only need to continue with 7.b.

I have to pause here before I can continue.
 
  • #78
9). I paused to ponder what this integration means. I think it's a path integral over a single parameter of a family of operators | q > < q | ( u ) ( I now use q for the ket instead of u, and use u as the parameter to denote this family of operators ), and a(u) is a certain coefficient of A for the subcomponent operator | q > < q | .

Now, we will have to think about what the integration and differential of operators are.

In a completely abstract setup with an infinite and possibly uncountable basis, defining an operator's integration will have to deal with something like examining the change | q > < q | ( u ) | \psi > of the family of operators | q > < q | ( u ) for all \psi in H. I will assume I do not need to go so far.

10). The bottom line here is that we will face a problem: the eigenvector | q_0 > of the eigenvalue q_0 cannot be represented by such an integral, except by setting a(u) to be a function whose value is \infty when u = q_0 and zero elsewhere, but whose integral is one.

11). In a way, if we already assume only square-integrable functions are legitimate coefficients, then this basically admits there are no true eigenvalues or eigenvectors for the operator Q. The eigenvalue we have observed as a point q_0 might actually be q_0 + \triangle q .

12). Note I am able to separate q and u in the integration. The equation equating q and u is basically a special case where the parameter u was set to be the coordinate itself.

13). Now back to 7.a: I will try to show whether it is possible to set up a Hilbert space that includes the square-integrable functions and the generalized functions in a different way.

Pause...
 
  • #79
14). Back to 9): I have said that using
A = \int a(u) | q > < q | ( u ) du
, we can represent a general representation of "mixed" states.
We will also find that if we transform the parameter to another parameter v, then
A = \int a(v) | q > < q | ( v ) (du/dv) dv .
So the coefficient for a new parameter v will be a(v) (du/dv).
The coefficient will change with the introduction of a different parameter.
If we intend to standardize the coefficient, the easiest choice will be using q as the standard parameter, so a(q) can be used to represent a state, and
A = \int a(q) | q > < q | ( q ) dq
or
A = \int a(v) | q > < q | ( v ) (dq/dv) dv .

15. If we compare this to a discrete case in which
A = \sum_n a_n | q_n > < q_n |
, we can note it's like placing the q_n on a real line, and
\int_{-\infty}^{q_0} a(q) | q > < q | ( q ) dq \cong
\sum_{q_n \le q_0} a_n | q_n > < q_n |

16. Back to 7.a: if we want to build a Hilbert space in which a state can be a discrete sum of eigenvectors of the "position" eigenvalues, we will write
\sum_n a_n | q_n > < q_n | .
Looking at 15, is there a way we can have both forms, summation and integration, coexist? I believe I have seen this in probability theory: you can set up a probability distribution P([ -\infty, a ]) where P([ -\infty, -\infty ]) = 0 and P([ -\infty, \infty ]) = 1; it shall also be an increasing function. This probability distribution is not necessarily continuous or differentiable everywhere; where it's differentiable, the derivative will be square integrable, and where it's not differentiable, it will have "jump" points whose "generalized" derivatives work just like delta functions.

Pause
 
  • #80
17) So, we can see the derivative of P([ -\infty, q ]), denoted f(q), is related to a(q). To clarify their relationship, I need to add the conditions on a(q) that you might have forgotten. For a mixed state, in the discrete case, \sum_n a_n = 1 ; so for a continuous case, I would say \int a(q) dq = 1 is needed.
By that, we can see a(q) is f(q).
Note, a(q) is not the wavefunction \psi(q) then, because
\int \overline{\psi(q)} \psi(q) dq = 1 .
If we want to relate them, then
\overline{\psi(q)} \psi(q) = a(q)
seems to be a possible solution.
Actually, there is an issue here, which is related to the exercise you showed as c_1 \psi_1 + c_2 \psi_2 = c_1 \psi_1' + c_2 \psi_2' .
 
  • #81
19). To illustrate this, I need to differentiate | q_n' > from | q_n > in that | q_n' > = a_n | q_n > where | a_n | = 1 but a_n ≠ 1 .
First, | q_n' > < q_n' | = | q_n > < q_n | ,
so
\int a(q') | q' > < q' | dq' = \int a(q) | q > < q | dq .
There is no way to distinguish the two "mixed" states from this point of view.
If we look from the perspective of a ket, compare
\int c(q') | q' > dq'
to
\int c(q) | q > dq
: even if c is the same function, they could be two different kets, by the exercise we have shown, in that even if | q' > and | q > are the "same", their complex linear combinations are not the "same", and the integration here can be viewed as a continuous linear combination of infinitely many "same" kets. Note these kets are associated with a "pure" state, though.
 
  • #82
Yes, it's strange. We can no longer edit our previous responses.
For me, it is not only "strange", but also, "too bad". This means that (apart from any 'embarrassment' that incorrect posts will remain "permanently" on line) the data base, as a whole, as a 'resource' for someone who just "surfs-in" (looking for information) will no longer be as reliable as it could have been. This is unfortunate. Someone "surfing" the net may arrive at a post in some thread and think that what is written there is correct without realizing that several posts later on a comment has been made explaining how that post was in fact incorrect.

I was envisioning that this website would become a real reliable "source" of accurate information. Now, I see that as far as my own posting is concerned, this will only be possible with additional 'care', over and above the usual amount, to make sure that posts are placed "correctly" at the onset (or shortly thereafter). Given my own limits of "time" and "knowledge", such a constraint may prove to be too demanding.
_______________
Actually, I have no idea what you mean by this --
"suppose for definiteness – without loss of generality"?
Would you mind elaborating it?
Sometimes, in the midst of a mathematical proof, one reaches a stage where a certain proposition P(k) will hold for at least one value of k. This particular value of k, however, is 'unknown' but nevertheless 'definite'. (For example, in the case of your "solution" above, the proposition P(k) was simply "a ≠ ak", and this had to be true for at least one of k =1 or k =2.)

Moreover, it is sometimes the case that the continuation of the proof proceeds in 'identical' fashion regardless of the particular value of k for which P(k) is true. (This was indeed the case for your "solution".) So, instead of saying that P(k) is true for some 'definite' value of k, say k = ko, where ko is 'unspecified', one says "suppose for definiteness that P(1) is true", and since the proof is the 'same' for any other 'choice' of k, one adds the remark "... without loss of generality".

The statement is, therefore, a sort of 'shorthand' which allows one to bypass certain 'mechanical' details and go straight to the essential idea behind the proof.
_______________
Basically, what will the Hilbert space look like if I build it with the eigenvectors of the Q operator? Since we know its eigenvectors are not square integrable on the real line, this Hilbert space might be bigger than the space of square-integrable functions. Some thorough knowledge of functional analysis and probability theory might be needed here.
In the "functional analysis" approach, one begins with a Hilbert space of square-integrable functions R → C. The 'justification' for this comes about from the Schrödinger equation (in "x-space") coupled with the Born probability rule that ψ*(x)ψ(x) is the "probability density", where the latter of these implies that the (physical) wavefunctions are all square-integrable. Thus, the probability P(I) of finding the particle in the (non-infinitesimal) interval I is given by

P(I) = (ψ, P_I ψ) ,

where P_I is the "projector" defined by

[P_I ψ](x) ≡
ψ(x) , x ∈ I
0 , otherwise

and we have defined an "inner product"

(φ, ψ) = ∫ φ*(x)ψ(x) dx .

This 'family' of projectors P_I already contains in it the idea of |q><q|, since they are connected by the simple relation

P_(a,b) = ∫_a^b |q><q| dq .
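
(A minimal numerical sketch of P(I) = (ψ, P_I ψ), assuming a grid-discretized Gaussian wavefunction and the illustrative interval I = (0, 1).)

[code]
import numpy as np

q = np.linspace(-5, 5, 2001)
dq = q[1] - q[0]
psi = np.exp(-q**2 / 2) / np.pi**0.25        # normalized Gaussian

inside = (q > 0) & (q < 1)                    # the interval I = (0, 1)
P_I_psi = np.where(inside, psi, 0.0)          # [P_I psi](x)

prob = np.sum(np.conj(psi) * P_I_psi) * dq    # (psi, P_I psi)
print(prob.real)                              # probability of x landing in I
[/code]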

... Now, let's look more closely at what you say:
This hilbert space might be bigger than the space of square integrable functions.
Here, you are suggesting the idea of "building" a space from the |q>'s in such a way that those objects themselves are included in the space. I have never thought about such a proposition in any detail. Nevertheless, the original Hilbert space would then be seen as "embedded" in a larger 'extended' vector space which would include the |q>'s (and whatever else).

For the record, you may want to know the 'technical' definition of a "Hilbert space" H:

(i) H is a "vector space";

(ii) H has an "inner product" ( , );

(iii) H is "complete" in the "induced norm" ║ ║ ≡ √( , );

(iv) H is "separable".

The last of these is usually not included in the definition. I have put it in here, since the Hilbert spaces of QM are always "separable". You may want to 'Google' some of these terms or check at mathworld or Wikipedia, or the like.

Note that such a notion of an "extended" space is used in what is called a "rigged" Hilbert space. I do not know much about such a construction and, in particular, I am unsure as to what its 'utility' is from a 'practical' point of view.

There is also the "Theory of Distributions" (or "Distribution Theory"), which deals with this idea of "generalized" functions (i.e. "distributions") in a formally rigorous way.
_______________
Actually, that also leads to another question. That justification brings me a Hilbert space with a probability decomposition but not necessarily complex-valued coefficients. I think the need for complex-valued coefficients seems to be explainable by the scattering of electron diffraction or its interference.
So far, we have been viewing the situation from a "static" perspective. As soon as we admit "motion" into the picture, then complex-valued coefficients come into play by way of necessity.

Think of a (time-independent) Hamiltonian, and the Schrödinger equation

iħ ∂_t |ψ(t)> = H|ψ(t)> .

With |φ_n> a basis of eigenkets such that H|φ_n> = E_n|φ_n> , we then have general solutions of the form

|ψ(t)> = ∑_n exp{ -i E_n t / ħ } c_n |φ_n> .

There is no way 'around' this. The coefficients must be complex-valued.
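
(A minimal numerical sketch of this solution, with assumed illustrative values for E_n and c_n: the coefficients are real at t = 0 but pick up unavoidable relative phases for t > 0.)

[code]
import numpy as np

hbar = 1.0
E = np.array([0.5, 1.5])       # eigenvalues E_n of H (illustrative)
c = np.array([0.6, 0.8])       # real coefficients at t = 0

def psi_t(t):
    # |psi(t)> = sum_n exp(-i E_n t / hbar) c_n |phi_n>, in the {|phi_n>} basis
    return np.exp(-1j * E * t / hbar) * c

print(psi_t(0.0))   # real at t = 0
print(psi_t(1.0))   # complex: relative phase exp(-i (E_2 - E_1) t / hbar)
[/code]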

Your example of "diffraction" or "interference" appears (to me) to be a special case of this general fact. On the other hand, we know that such problems can be 'treated' by the formalism of "classical optics", in which case the use of complex-valued coefficients is merely a matter of 'convenience', and not one of 'necessity' (so, I'm not so sure that this is in fact a 'good' example).
_______________
In order to define Q | \psi >, you have to have a background manifold; then you can say Q | \psi > = q | \psi >, where q is not a constant.
You mean: Q|ψ_q> = q|ψ_q>, where q is not a constant.
---> Then this will lead to f(q) not being a square-integrable function.
Yes. ... And as I mentioned above, "Distribution Theory" handles this 'difficulty' in a perfectly rigorous way.
_______________
 
  • #83
7). We can see two approaches to expanding the space now.
a. Discrete approach : | u > = \sum_n a_n | u_n > or
A = \sum_n a_n | u_n > < u_n |
b. Continuous approach : | u > = \int a | u > du or
A = \int a | u > < u | du

Note I used a for the coefficient instead of u. It reads better to me in that it shows it's a function of u but dependent on A. It's more comparable to the discrete notation too. Hope you agree.
Here are some "notational" details:

The 'continuous analogue' of the notation for the 'discrete case'

[1] A = ∑_n a_n|u_n><u_n|

is

[1'] A = ∫ a(s)|u(s)><u(s)| ds .

What you wrote, i.e. (note: I have put "a" → "a(u)")

[2'] A = ∫ a(u)|u><u| du ,

is the analogue of

[2] A = ∑_n a_n|n><n| .

Finally, the analogue of

[3'] A = ∫ a |a><a| da

is

[3] A = ∑_{a_n} a_n|a_n><a_n| .
_______________
9). I paused to ponder what this integration means. I think it's a path integral over a single parameter of a family of operators | q > < q | ( u ) ( I now use q for the ket instead of u, and use u as the parameter to denote this family of operators ), and a(u) is a certain coefficient of A for the subcomponent operator | q > < q | .
I don't see how it can be construed as a "path integral". In a path-integral formulation of the problem for a particle moving in one dimension, the single parameter q is construed as a function of time, i.e. q(t), where that function is varied over all 'possible' functions on t ∈ [t_1, t_2] subject to the constraint δq(t_1) = δq(t_2) = 0. We would then have

<q(t_2)|q(t_1)> = a path integral .

But here, the 'closest' thing I can see is

<q'|q> = δ(q' - q) .

In short, a "path integral" can come into play once we consider the "time evolution" of the quantum system. Right now, we are only concerned with the situation at a single 'given' time.
_______________
Now, we will have to think about what the integration and differential of operators are.

In a completely abstract setup with an infinite and possibly uncountable basis, defining an operator's integration will have to deal with something like examining the change | q > < q | ( u ) | \psi > of the family of operators | q > < q | ( u ) for all \psi in H. I will assume I do not need to go so far.
Now, the "family" of operators you are considering, what I will call |q><q|, is very much like a 'derivative' of the projector P_I which I mentioned before; i.e.

[P_I ψ](x) ≡
ψ(x) , x ∈ I
0 , otherwise .

Let us define E(q) ≡ P_(-∞,q). Then, 'formally' we have

dE(q) = |q><q| dq .

The LHS is the 'formal' expression for the "differential of the spectral family" in the context of "functional analysis"; the RHS is the "Dirac" equivalent.
_______________
10). The bottom line here is that we will face a problem: the eigenvector | q_0 > of the eigenvalue q_0 cannot be represented by such an integral, except by setting a(u) to be a function whose value is \infty when u = q_0 and zero elsewhere, but whose integral is one.

11). In a way, if we already assume only square-integrable functions are legitimate coefficients, then this basically admits there are no true eigenvalues or eigenvectors for the operator Q.
Yes. And this is where "Distribution Theory" comes in.


... The eigenvalue we have observed as a point q_0 might actually be q_0 + \triangle q .
I don't see this (... unless we take the limit Δq → 0).
_______________
_______________

... As it turns out, unfortunately, starting this week and continuing on for the next several months(!), I will become very busy. Consequently, I will have little time for any significant activity in the Forum here. I have already reduced my posting to only this thread alone (over the last few weeks).

This week, however, I still do hope to at least get to the next two postulates and connect them to the original issue which was of concern – "expectation values", "mixed states", and the "Trace" operation. If you recall, it was matters of this kind which caused me to ask you if you had gone over the postulates in a clear, concise way.

... After that, there will be only one more postulate, that of "time evolution". If we deal with that here, I must tell you in advance that my input into this thread will 'evolve' only very slowly.
_______________
 
  • #84
Eye,

Thanks for your reply.

I think the concept of the projector of an interval is more straightforward and better than my approach, even though I think the way I approached it can eventually be proved the same. There is just a misunderstanding here: maybe I shall not use the word "path integral"; I did not mean to associate that integration with any time parameter. The parameter is just any real line in this case. Of course, in this case, I will have to be able to define how to integrate an operator function over a real line. The idea of the projector of an interval, and so the point projector being its derivative, takes care of the issue of what the integration is here.
 
  • #85
Eye,

Sorry about this stupid question.

But what does LHS and RHS stand for? I can't find it in mathworld.

Thanks
 
  • #86
Actually, I did a little bit of verification here to see how this is derived.

The probability P(I) of finding the particle in the (non-infinitesimal) interval I is given by

P(I) = (ψ, P_I ψ) ,
-----------------------------------------

First, in a discrete case,
P(I) = \sum_n (\psi, q_n) (q_n , \psi) .
Take P_I \psi = \sum_n (\psi, q_n) q_n ; then
(\psi, P_I \psi ) = \sum_n \overline{( \psi, q_n )} ( \psi , q_n ) =
\sum_n ( q_n , \psi ) ( \psi , q_n ) .

Then, translated into the continuous case,
P(I) = \int_a^b (\psi, q) (q , \psi) dq .
Take P_I \psi = \int_a^b (\psi, q) | q > dq ; then
(\psi, P_I \psi ) = ( \psi, \int_a^b ( \psi, q ) | q > dq ) =
\int_a^b \overline{( \psi, q )} ( \psi , q ) dq =
\int_a^b ( q , \psi ) ( \psi , q ) dq .

Now, this looks better.
 
  • #87
Sammywu said:
Eye,

Sorry about this stupid question.

But what does LHS and RHS stand for? I can't find it in mathworld.

Thanks
"LHS" stands for "left-hand-side"; "RHS" stands for "right-hand-side". :smile:
 
  • #88
Eye,

Thanks. I actually thought they could stand for some special Hilbert spaces.

Anyway, your mention of the "rigged" Hilbert space is probably what I was led to with a ket defined as a function series { f_n } and
lim_{ n \rightarrow \infty } \int_{-\infty}^\infty f_n = 1
. So all kets can be treated as function series. Just as you said, it might not be of any practical use. I guess there is no need to continue.

Anyway, I have gone through an exercise showing me that I can construct a "wavefunction" space from any observed continuous eigenvalues.

Note the arguments applied are not specific to "position" but are applicable to any continuous eigenvalues.
 
  • #89
I am not sure whether this is too much, but I found I can go even further; something interesting is here.

21). I can represent a ket in such a way:
\int \psi(q) | q > dq
This shows that the wavefunction is actually an abbreviated way of writing this ket.

The eigenvector of an eigenvalue q_0 can then be written as
\int \delta( q - q_0 ) | q > dq .

Or in general, I can extend this to an example such as a function series { f_n }
in which:

lim_{ n \rightarrow \infty } f_n(q) = \infty as q \rightarrow q_1 , q_2
and
lim_{ n \rightarrow \infty } \int_{ - \infty }^{q_1} f_n(q) dq = a_1
lim_{ n \rightarrow \infty } \int_{ - \infty }^{q_2} f_n(q) dq = 1

22). I can even check what the inner product of two kets shall be, without a clear prior definition of inner products:

< \psi_1 | \psi_2 > =
< \int \psi_1(q) | q > dq | \int \psi_2(q') | q' > dq' > =
\int \overline{\psi_1(q)} < q | \int \psi_2(q') | q' > dq' > dq =
\int \overline{\psi_1(q)} \int \psi_2(q') < q | q' > dq' dq
 
  • #90
Response to posts #79-81

14). ... I have said that using
A = \int a(u) | q > < q | ( u ) du
, we can represent a general representation of "mixed" states.
Now, wait just a moment! How did we get onto the subject of "states" in a decomposition like that of above? Up until now, we have been talking about "observables". ... "Mixed states" will come soon.
______________
... If we intend to standardize the coefficient, the easiest choice will be using q as the standard parameter ... and
A = \int a(q) | q > < q | ( q ) dq
Yes, the easiest choice of "notation" is

A = ∫ a(q) |q><q| dq .

Note, however, that such an operator is merely a function of Q. Specifically, A = a(Q). In other words, the matrix elements of A, in the "generalized" |q>-basis, are given by

<q|A|q'> = a(q) δ(q – q') .

(It turns out that: any linear operator L is a 'function' of Q iff [L,Q] = 0. (This, of course, applies to a spinless particle moving in one dimension.))

BUT ...

In all of this, I am getting the feeling that each of us is misunderstanding what the other means. In the above, if you 'meant' that A is some self-adjoint operator whose spectrum is (simple) continuous, then 'automatically' we can write

[1] A = ∫ a |a><a| da

with no difficulty whatsoever. There is no reason to write it any other way, because by 'hypothesis'

[2] A|a> = a|a> .

The exact analogue of these expressions in the corresponding (nondegenerate) discrete case is

[1'] A = ∑_a a |a><a| ,

and

[2'] A|a> = a|a> .

In the discrete case, however, we modify the notation by introducing an index like "n" because somehow 'it is more pleasing to the eye'. But to do an analogous thing in the continuous case is completely uncalled for, since doing so will introduce a new element of "complexity" which provides no advantage whatsoever. ... Why should we write "a" as a function of some parameter "s", say a = w(s), and then have da = w'(s)ds? ... We will get nothing in return for this action except additional "complexity"! (Note that changing the "label" for the generalized ket |a> → |u(a)> introduces no such difficulties.)
______________
16. ... we will write
\sum_n a_n | q_n > < q_n | .
Looking at 15, is there a way we can have both forms, summation and integration, coexist?
Yes. Given a self-adjoint operator A, then "the spectrum of A" (i.e. "the set of all eigenvalues ('generalized' or otherwise) of A") can have both discrete and continuous parts. A simple example of such an observable is the Hamiltonian for a finite square-well potential. The "bound states" are discrete (i.e. "quantized" energy levels), whereas the "unbound states" are continuous (i.e. a "continuum" of possible energies).
______________
17) ... For a mixed state, in the discrete case, \sum_n a_n = 1 ; so for a continuous case, I would say \int a(q) dq = 1 is needed.
Hopefully, soon we will be able to talk 'sensibly' about "mixed states". Once we do that, you will see that a 'state' like

ρ = ∫p(q)|q><q|dq (with, of course, p(q) ≥0 (for all q) and ∫ p(q) dq = 1) ,

is not 'physically reasonable'.

So far, we have explained only "pure states", as given by our postulate P1. Recall:
P0: To a quantum system S there corresponds an associated Hilbert space H_S.

P1: A pure state of S is represented by a ray (i.e. a one-dimensional subspace) of H_S.
When we get to discussing "mixed states", we will not explain them in terms of a "postulate", but rather, those objects will be introduced by way of a 'construction' in terms of "pure states". I have already alluded to such a "construction" in post #46 of this thread. There I wrote:
A pure state is represented by a unit vector |φ>, or equivalently, by a density operator ρ = |φ><φ|. In that case, ρ^2 = ρ.

Suppose we are unsure whether the state is |φ_1> or |φ_2>, but know enough to say that the state is |φ_i> with probability p_i. Then the corresponding density operator is given by

ρ = p_1|φ_1><φ_1| + p_2|φ_2><φ_2| .

In that case ρ^2 ≠ ρ, and the state is said to be mixed. Note that the two states |φ_1> and |φ_2> need not be orthogonal (however, if they are parallel (i.e. differ only by a phase factor), then we don't have a mixed case but rather a pure case).
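
(A small numerical sketch of this construction, with two illustrative non-orthogonal states and equal weights.)

[code]
import numpy as np

phi1 = np.array([1.0, 0.0], dtype=complex)
phi2 = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # not orthogonal
p1, p2 = 0.5, 0.5

rho = p1 * np.outer(phi1, phi1.conj()) + p2 * np.outer(phi2, phi2.conj())

print(np.isclose(np.trace(rho).real, 1.0))   # Tr(rho) = 1
print(np.allclose(rho @ rho, rho))           # False: rho^2 != rho, so mixed
[/code]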
______________
______________

Sammy, I am hoping to post a response to your posts #84, 86, 88, 89 by Monday. After that I hope to get at least one more postulate out. (There are also (at least) two items from our previous exchanges which I wanted to address.)
 
  • #91
23). In trying to evaluate 22), I found I need something clearer about how to represent all vectors. Let me put all eigenvalues on one real line; for each q on this real line, we associate an eigenvector \vec{q} with it. I want to avoid using | q > for now, because | q > is actually a ray. Also, remember there are many vectors c \vec{q} with | c | = 1 that can be placed here; let's just pick any one of them.

So, now with a function c(q), we can do a vector integration over the q real line as:
\int_{ - \infty }^\infty c(q) \vec{q} dq
Note the q in c(q) and dq is just a parameter, and \vec{q} is a vector, also viewed as a vector function parameterized by q.

24). Referring back to 21), all vectors can now be represented by:
lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty c_n(q) ^\rightarrow{q} dq

25). In particular, let
\delta_n( q - q_0 ) = n for q_0 - 1/(2n) <= q <= q_0 + 1/(2n) and 0 elsewhere,
the eigenvector for q_0 can be represented as:
lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(q - q_0) ^\rightarrow{q} dq

26). And, for other vectors, c_n(q) can be set to a fixed function c(q) independent of n;
we can verify its consistency with the normal representation:

lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty c_n(q) ^\rightarrow{q} dq =
\int_{ - \infty }^\infty c(q) ^\rightarrow{q} dq =
\int_{ - \infty }^\infty c(q) lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(q \prime -q) ^\rightarrow{q \prime } dq \prime dq =
lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \int_{ - \infty }^\infty c(q) \delta_n(q \prime - q) ^\rightarrow{q \prime} dq dq \prime =
lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty ( \int_{ - \infty }^\infty c(q) \delta_n(q \prime - q) dq ) ^\rightarrow{q \prime} dq \prime =
lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty c(q \prime) ^\rightarrow{q \prime} dq \prime

27). For inner products of c and d,
( \int_{ - \infty }^\infty c(q) ^\rightarrow{q} dq , \int_{ - \infty }^\infty d(q \prime ) ^\rightarrow{q \prime } dq \prime ) =
\int_{ - \infty }^\infty \overline{d(q \prime)} ( \int_{ - \infty }^\infty c(q) ^\rightarrow{q} dq , ^\rightarrow{q \prime} ) dq \prime

28). Now, I have to discuss what shall it be for
( \int_{ - \infty }^\infty c(q) ^\rightarrow{q} dq , ^\rightarrow{q \prime } )

First, if we look into the inner product of two eigenvectors ^\rightarrow{q \prime} and ^\rightarrow{q}, we can first think about what the inner product shall be between
lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(u - q) ^\rightarrow{u} du
and
lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(v - q \prime) ^\rightarrow{v} dv
.

Comparing it to the discrete case, I guess this could be
| ( q , q \prime) | = lim_{ i \rightarrow \infty } lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \delta_j( u - q ) du

So, in general,
( q , q \prime) = e^{ia} lim_{ i \rightarrow \infty } lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \delta_j( u - q ) du

The phase factor is put in to show the possibility of two out-of-phase eigenvectors. For now, we can assume our standard basis consists of in-phase vectors.

With this, we can further translate the inner part of 27) to:

( \int_{ - \infty }^\infty c(q) ^\rightarrow{q} dq , ^\rightarrow{q \prime} ) =
\int_{ - \infty }^\infty c(q) ( ^\rightarrow{q} , ^\rightarrow{q \prime} ) dq =
\int_{ - \infty }^\infty c(q) lim_{ i \rightarrow \infty } lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \delta_j( u - q ) du dq =
lim_{ i \rightarrow \infty } lim_{ j \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) \int_{ - \infty }^\infty c(q) \delta_j( u - q ) dq du =
lim_{ i \rightarrow \infty } \int_{ - \infty }^\infty \delta_i( u - q \prime) c(u) du =
c(q \prime)

Placing that into 27), I get
( \int_{ - \infty }^\infty c(q) ^\rightarrow{q} dq , \int_{ - \infty }^\infty d(q \prime ) ^\rightarrow{q \prime } dq \prime ) =
\int_{ - \infty }^\infty \overline{d(q \prime)} c ( q \prime) dq \prime
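The key step above, that smoothing against \delta_n reproduces the function value in the limit, can be checked numerically. A sketch (the test function c(q) = e^{-q^2} and the grid are arbitrary choices of mine):

import numpy as np

def delta_n(x, n):
    # the box function of 25): height n on [-1/(2n), 1/(2n)], integral 1
    return np.where(np.abs(x) <= 0.5 / n, float(n), 0.0)

c = lambda q: np.exp(-q**2)          # arbitrary smooth test function
q_prime = 0.7
q = np.linspace(-10.0, 10.0, 200001)
dq = q[1] - q[0]

for n in [1, 10, 100]:
    val = np.sum(c(q) * delta_n(q_prime - q, n)) * dq
    print(n, val)                    # approaches c(0.7) = e^{-0.49} ~ 0.6126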
 
  • #92
Eye,

Thanks for the reply. It definitely caught my misunderstandings and stimulated my thoughts too.
 
  • #93
Eye,

Now I really know what you were showing me. I definitely went in a different direction. You are showing me that a self-adjoint operator can be decomposed into an integration of its eigenvalues multiplied by its eigenprojectors.

So, Q = \int q | q > < q | dq . Definitely correct.

And can I do this?
Q | \psi > = ( \int q | q > < q | dq ) ( \int \psi(q) | q > dq ) = \int q \psi(q) | q > dq
By the above EQ., if we see \psi(q) representing | \psi > , then
Q | \psi > = q * \psi ( q ) = q * | \psi >.

This is of course due to the fact that \psi ( q ) is the coefficient when choosing the eigenvectors of Q as the basis.

If the energy eigenvectors are chosen as the basis, then we can write
H | \psi > = E * | \psi >
, because the Hamiltonian's eigenvalues are energies.

While I use
| \psi > = \int c(q) | q > dq
because I treat q as a parameter here, I think I saw another notation of this form:
| \psi > = \int c(q) d | q >
Do you have any comments on that?
 
Last edited:
  • #94
Just make some conclusions on my deduction:

I. After including the phase factor consideration, | q_0 > as the normalized eigenvector of the eigenvalue q_0 can be denoted as:

lim_{ n \rightarrow \infty } \int_{ - \infty }^\infty \delta_n(q - q_0) e^{ik(q)} | q > dq
, or
\int_{ - \infty }^\infty \delta(q - q_0) e^{ik_0} | q > dq

This can be checked against the requirement that it shall be representable as
\sum_n c_n \psi_n =
\sum_n c_n \int_{ - \infty }^\infty \psi_n( q ) | q > dq
where \psi_n is a normalized eigenvector of the Hamiltonian, because this infinite summation can also be viewed as the limit of a sequence of functions. ( To use the correct mathematical term, I think I shall say sequence instead of series; series is reserved for an infinite summation. Correct? )

II. If we do not simplify the observed 3-dimensional eigenvalues into a discussion of any single one of them, then we will find that the three measurables form a 3-dimensional vector, and we will need to think about what a "vector operator" or a "vector observable" is.

We will have more to explore, such as: what does the rotation of a vector operator mean, and what shall the inner (scalar) product and the outer product of two vector operators be?
 
  • #95
Responses to posts #84,86,88,89

About the object "|q><q|" which you referred to as a "point projector". Do you realize that "|q><q|" is not a "projector"? ... A necessary condition for an object P to be a projector is P² = P. But

(|q><q|)² = |q><q| δ(0) = ∞ .

On the other hand, P_I defined by

[P_I ψ](x) ≡
ψ(x) , x ∈ I
0 , otherwise ,

or equivalently,

P_I ≡ ∫_I |q><q| dq ,

does satisfy P_I² = P_I.
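On a discretized line this distinction can be seen directly. A sketch of mine (grid and interval are arbitrary): the interval projector is a diagonal 0/1 matrix and is idempotent, while the grid stand-in for |q><q| squares to δ(0) ≈ 1/∆q times itself:

import numpy as np

q = np.linspace(-5.0, 5.0, 201)
dq = q[1] - q[0]

# P_I = ∫_I |q><q| dq on the grid: a diagonal matrix of 0's and 1's
mask = (q >= -1.0) & (q <= 1.0)
P_I = np.diag(mask.astype(float))
print(np.allclose(P_I @ P_I, P_I))     # True: P_I^2 = P_I

# grid kernel of |q0><q0|: a single spike delta(q - q0) delta(q' - q0)
k = np.argmin(np.abs(q))               # q0 = 0
K = np.zeros((q.size, q.size))
K[k, k] = 1.0 / dq**2
K2 = K @ K * dq                        # kernel composition: ∫ K(q,u) K(u,q') du
print(K2[k, k] / K[k, k], 1.0 / dq)    # equal: the divergent delta(0) factor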
______________

According to the "Born rule", the probability for finding the particle in the interval I is given by

P(I) = ∫_I ψ*(q)ψ(q) dq .

But ψ(q) ≡ <q|ψ>, so that

P(I) = ∫_I <ψ|q><q|ψ> dq

= <ψ| { ∫_I |q><q| dq } |ψ>

= <ψ|P_I|ψ> .

The last expression corresponds to (ψ, P_I ψ) in the notation of "functional analysis".

Apart from some 'notational difficulties', you have performed this verification correctly:
P(I) = \int_a^b (\psi, q) (q ,\psi) dq
Take P_I \psi = \int_a^b (\psi, q) | q > dq ,
(\psi, P_I \psi ) = ( \psi, \int_a^b ( \psi, q ) | q &gt; dq ) =
\int_a^b \overline{( \psi, q )} ( \psi , q) dq =
\int_a^b ( q , \psi ) ( \psi , q) dq
____

As for what you say regarding a discrete case:
P(I) = \sum_n (\psi, q_n) (q_n , \psi)
Take P_I \psi = \sum_n (\psi, q_n) q_n ,
(\psi, P_I \psi ) = \sum_n \overline{( \psi, q_n )} ( \psi , q_n) =
\sum_n ( q_n , \psi ) ( \psi , q_n)
... the picture you are presenting of a "discrete" position observable in terms of "discrete" eigenkets does not make sense.

To make the position observable "discrete" we want to have a small "interval" I_n corresponding to each "point" q_n, say

q_n = n∙∆q , and I_n = (q_n - ∆q/2 , q_n + ∆q/2] .

Then, our "discrete" position observable, call it Q_∆q, will be a degenerate observable. It will have eigenvalues q_n and corresponding eigenprojectors (not kets!) P_{I_n}. That is,

Q_∆q = ∑_n q_n P_{I_n} .
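A sketch of this construction on a grid (my own illustration; the grid and ∆q are arbitrary): each P_{I_n} is built from the grid points falling in I_n, and Q_∆q is their weighted sum.

import numpy as np

q = np.linspace(-4.0, 4.0, 801)
Dq = 0.5                                    # coarse-graining width ∆q

Q_coarse = np.zeros((q.size, q.size))
for n in range(-8, 9):                      # q_n = n∆q, covering the grid
    qn = n * Dq
    in_In = (q > qn - Dq / 2) & (q <= qn + Dq / 2)
    P_In = np.diag(in_In.astype(float))     # eigenprojector onto interval I_n
    assert np.allclose(P_In @ P_In, P_In)   # each P_In is a projector
    Q_coarse += qn * P_In                   # Q_∆q = ∑_n q_n P_{I_n}

# the observable is degenerate: every grid point in I_n shares eigenvalue q_n
print(np.unique(np.diag(Q_coarse)))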
______________
21). I can represent a ket in such a way:
\int \psi(q) | q > dq
This shows that the wavefunction is actually an abbreviated way of writing this ket.
Yes. In general,

[1] |ψ> = ∫ ψ(q) |q> dq ,

which is equivalent to

[2] ψ(q) = <q|ψ> .

Thus, ψ(q) is the "component" of |ψ> in the ("generalized") |q>-basis.

Relative to an 'ordinary' basis we have two similar expressions:

[1'] |ψ> = ∑_n c_n |φ_n> ,

[2'] c_n = <φ_n|ψ> .

The requirement that |ψ> ∈ H, i.e. <ψ|ψ> < ∞ , in terms of [1] (and [2]) means

∫ |ψ(q)|² dq < ∞ ,

whereas in terms of [1'] (and [2']) it means

∑_n |c_n|² < ∞ .

Everything is the 'same' in both cases, except for the fact that <q|q> = ∞ , whereas <φ_n|φ_n> = 1. That is, each |q> is not a member of H, whereas each |φ_n> is. Indeed, as you say:
The eigenvector of an eigenvalue q_0 can then be written as
\int \delta(q - q_0) | q > dq .
... and <q_0|q_0> = ∫ |δ(q - q_0)|² dq = δ(0) = ∞ .
______________
Any way, I have gone through an exercise showing me that I can construct a "wavefunction" space with any observed continuous eigenvalues.
Yes. The position observable Q is used the most frequently for this purpose. The next most frequently used is the momentum observable P.
______________
22). I can even check what the inner product of two kets shall be, without a clear prior definition of the inner product:

< \psi_1 | \psi_2 > =
< \int \psi_1(q) |q > dq | \int \psi_2(q \prime) | q \prime > dq \prime > =
\int \overline{\psi_1(q)} < q | \int \psi_2(q \prime) | q \prime > dq \prime > dq =
\int \overline{\psi_1(q)} \int \psi_2(q \prime) < q | q \prime > dq \prime dq =
In the above, the internal consistency of our formulation is brought out as soon as we write

<q|q'> = δ(q - q') .

Then, the last integral becomes

∫ ψ_1*(q) ∫ ψ_2(q') δ(q - q') dq' dq

= ∫ ψ_1*(q) ψ_2(q) dq ,

which is just what we want for <ψ_1|ψ_2>.
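Numerically, this is just the statement that the grid sum ∑_q ψ_1*(q) ψ_2(q) ∆q approximates <ψ_1|ψ_2>. A sketch with two arbitrary normalized Gaussian packets (my choice, purely for illustration):

import numpy as np

q = np.linspace(-20.0, 20.0, 40001)
dq = q[1] - q[0]

psi1 = np.pi**-0.25 * np.exp(-q**2 / 2)                            # arbitrary packet
psi2 = np.pi**-0.25 * np.exp(-(q - 1.0)**2 / 2) * np.exp(0.5j * q) # another packet

print(np.sum(psi1.conj() * psi2) * dq)   # <psi1|psi2> = ∫ psi1*(q) psi2(q) dq
print(np.sum(np.abs(psi1)**2) * dq)      # ~ 1: each packet is normalized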
______________
 
  • #96
Response to post #93

(Note: I am deferring a response to post #91 until later.)

Sammywu said:
Now I really know what you were showing me. I definitely went in a different direction. You are showing me that a self-adjoint operator can be decomposed into an integration of its eigenvalues multiplied by its eigenprojectors.

So, Q = \int q | q > < q | dq . Definitely correct.
Yes ... when Q is a self-adjoint operator with pure continuous (nondegenerate) spectrum.
____________
And can I do this?
Q | \psi > = ( \int q | q > < q | dq ) ( \int \psi(q) | q > dq ) = \int q \psi(q) | q > dq
Yes, but use distinct integration variables in each of the integrals, say q in the first and q' in the second, so you can then show the 'computation' explicitly, like this:

Q|ψ>

= (∫ q|q><q| dq) (∫ ψ(q')|q'> dq')

= ∫dq q|q> ∫ψ(q')<q|q'> dq'

= ∫dq q|q> ∫ψ(q') δ(q - q') dq'

= ∫ qψ(q)|q> dq [E1] .
____________
By the above EQ., if we see \psi(q) representing | \psi > , then
Q | \psi > = q * \psi ( q ) = q * | \psi >.
No. The relation Q|ψ> = q|ψ> would mean that |ψ> is an eigenket of Q, something you do not wish to imply. In words, what you want to express is this: "the action of Q on |ψ>, when depicted in the q-space of functions, is multiplication by q".

That is easy to do. Given any ket |φ>, its q-space representation is just <q|φ>, which we write as φ(q). Now, we want |φ> = Q|ψ> in the q-representation, which is therefore just

<q|(Q|ψ>) = <q| (∫ q'ψ(q')|q'> dq') , using [E1] above

= ∫ q'ψ(q') <q |q'> dq'

= ∫ q'ψ(q') δ(q - q') dq'

= qψ(q) .

Alternatively, from Q|q> = q|q>, we have (Q|q>)† = (q|q>)†, which becomes <q|Q† = <q| q*. But Q† = Q and q* = q, so <q|Q = q<q|. Therefore,

<q|Q|ψ> = q<q|ψ> = qψ(q) .
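A grid check of this point (my sketch; the wave packet is an arbitrary Gaussian): Q|ψ> in the q-representation is qψ(q), and yet ψ is not an eigenvector of Q.

import numpy as np

q = np.linspace(-10.0, 10.0, 2001)
psi = np.pi**-0.25 * np.exp(-(q - 1.0)**2 / 2)   # arbitrary wave packet

Q = np.diag(q)                        # Q = ∫ q |q><q| dq, discretized
print(np.allclose(Q @ psi, q * psi))  # True: <q|Q|psi> = q psi(q)

ratio = (Q @ psi) / psi               # constant only for an eigenvector
print(np.allclose(ratio, ratio[0]))   # False: psi is not an eigenvector of Q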

This is of course due to the fact that \psi ( q ) is the coefficient when choosing the eigenvectors of Q as the basis.
Yes,

ψ(q) is just the q-component of |ψ> in the generalized |q>-basis.
____________

Compare this last statement with the case of a (non-"generalized") discrete basis.

In a discrete basis |φ_n>, what is the φ_n-representation of |ψ>? ... It is just <φ_n|ψ>. And if we write |ψ> = ∑_n c_n |φ_n>, we then have <φ_n|ψ> = c_n. So,

c_n is just the n-component of |ψ> in the |φ_n>-basis.

... In the discrete case, this is 'obvious'. The continuous case should now be 'obvious' too.

Perhaps a 'connection' to "matrices" may offer further insight. So here we go!
____________

Note that, in what follows, no assumption is made concerning the existence of an "inner product" on the vector space in question. It is therefore quite general. (Note: I am just 'cutting and pasting' from an old post.)
_____

Let b_i be a basis. Then, (using the "summation convention" for repeated indices) any vector v can be written as

v = v_i b_i .

In this way, we can think of the v_i as the components of a column matrix v which represents v in the b_i basis. For example, the vector b_k relative to its own basis is represented by a column matrix which has a 1 in the k-th position and 0's everywhere else.

Now, let L be a linear operator. Let L act on one of the basis vectors b_j; the result is another vector in the space which is itself a linear combination of the b_i's. That is, for each b_j, we have

[1] L b_j = L_ij b_i .

In a moment, we shall see that this definition of the "components" L_ij is precisely what we need to define the matrix L corresponding to L in the b_i basis.

Let us apply L to an arbitrary vector v = v_j b_j, and let the result be
w = w_i b_i. We then have

w_i b_i

= w

= Lv

= L(v_j b_j)

= v_j (L b_j)

= v_j (L_ij b_i) ... (from [1])

= (L_ij v_j) b_i .

If we compare the first and last lines of this sequence of equalities, we are forced to conclude that

[2] w_i = L_ij v_j ,

where L_ij was, of course, given by [1].

Now, relation [2] is precisely what we want for the component form of a matrix equation

w = L v .

We, therefore, conclude that [1] is the correct "rule" for giving us the matrix representation of a linear operator L relative to a basis b_i.
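A small numerical rendering of rule [1] (a sketch of mine, not from the original post): build the matrix column by column from the action of L on the basis vectors, then check relation [2] on an arbitrary vector.

import numpy as np

rng = np.random.default_rng(0)
L_hidden = rng.standard_normal((3, 3))   # stands in for the abstract operator L
apply_L = lambda x: L_hidden @ x         # we only ever use its *action*

# rule [1]: L b_j = L_ij b_i, so column j of the matrix is L applied to b_j
b = np.eye(3)                            # the standard basis b_i
L = np.column_stack([apply_L(b[:, j]) for j in range(3)])

# rule [2]: w_i = L_ij v_j reproduces w = Lv
v = rng.standard_normal(3)
print(np.allclose(L @ v, apply_L(v)))    # True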
_____

The above description of components is quite general. It relies on the following two "facts" concerning a "basis" b_i:

(i) any vector can be written as a linear combination of the b_i,

(ii) the coefficients in such a linear combination are unique.

Now, here is an exercise:

Draw the 'connection' between what was just described above and our Hilbert space.

Your answer should be short and straight 'to the point'. To show you what I mean, I will get you started:

b_i = |b_i>

v = |v>

v_i = <b_i|v>

etc ...
___

... What about a continuous basis, say |q>?
____________

Now getting back to your post:
If the energy eigenvectors are chosen as the basis, then we can write
H | \psi > = E * | \psi >
, because the Hamiltonian's eigenvalues are energies.
This is the same mistake you made above with "Q|ψ> = q|ψ>", which you now know is wrong ... right?
____________
While I use
| \psi > = \int c(q) |q > dq
because I treat q as a parameter here, I think I saw another notation of this form:
| \psi > = \int c(q) d |q >
Do you have any comments on that?
The object "|q>" in each formula is obviously not the same.

In the first formula, we have "|q>dq" in an integral which produces a vector of the Hilbert space (provided that ∫ |c(q)|² dq < ∞). The interpretation of "|q>" is, therefore, that of a "vector density" in the Hilbert space, while "dq" is the associated "measure". Their product, "|q>dq", then has the interpretation of an "infinitesimal vector".

In the second formula, we see "d|q>". Its interpretation is that of an "infinitesimal vector". I will change the notation to avoid confusion and write "d|q>" as "d|q)". An appropriate definition of "|q)" in terms of the usual "|q>" is then

|q) = ∫_{-∞}^{q} |q'> dq' .

Thus, |q) is also a kind of "generalized vector". If we now take |q) as the "given", then from it we can define |q> ≡ d|q)/dq.

From the perspective of any calculation I have ever performed in quantum mechanics, the "|q>" notation of Dirac is superior.
 
Last edited:
  • #97
Eye,

I am still digesting your response. So it's going to take me a while to answer that exercise.

Just respond to some points you made:

1) I did not know | q > < q | is not a projector. I have to think about that.

2). I did hesitate to write Q | \psi > = q | \psi > for the same reasons you mentioned, but in both Leon's eBook and another place I did see mention of Q's definition as Q | \psi > = q | \psi >. Just as I mentioned, the only way I could see this "make sense" is by either
Q | \psi > = \int q | q > < q | \psi > dq or
Q | \psi > = q \psi ( q ) in the form of wavefunctions.

3). I think your definition \psi ( q ) = < q | \psi > actually will make many calculations I did, in showing in general
< \psi \prime | \psi > = \int \overline{\psi \prime ( q ) } \psi ( q ) dq
, much more straightforward than my cumbersome calculations.

But one problem is then: what is < q | q >? The discrete answer would be that it needs to be one, which of course will lead to some kind of conflict with the general requirement that
\int \overline{\psi ( q ) } \psi ( q ) dq = 1 .

This is probably related to the eigenprojector P_{I_n} you mentioned.

4). Actually, I noticed my deduction has an unresolved contradiction.

There is an issue to be resolved in my eigenfunction for "position" q_0 as
lim_{ n \rightarrow \infty } \int \delta_n(q - q_0) | q > dq
. The problem here is whether the norm of \delta_n( q - q_0 ) shall be one, or whether its direct integration shall be one.
If the norm shall be one, then \delta_n shall be replaced by its square root.
 
  • #98
Eye,

Answer to the exercise:

What you show here is a vector space with a basis, and the Hilbert space is a vector space with an inner product, so I think what lies behind this is how to establish the relationship between an arbitrary basis and an inner product.

I). Discrete case:

I.1) From an arbitrary basis to an inner product:

For two vectors v and w written as
v = \sum_i v_i b_i and w = \sum_j w_j b_j
with any basis b_i, we can define an inner product by ( v , b_i ) = v_i, and we can deduce from there that
( v , w ) = \sum_i v_i \overline{w_i}.
This inner product will satisfy all conditions required for an inner product, and { b_i } automatically becomes an orthonormal basis.

If
b_i \prime = L b_i = \sum_j L_{ij} b_j
transforms the basis b_i to b_i \prime, and b_i \prime happens to be orthonormal in the inner product we defined, then L shall be a unitary transformation. ( I haven't proved this yet, but I think it shall be right. )

I.2) From an inner product to a basis:

Let any two \psi_1 , \psi_2 \in H be given; set
b_1 = \psi_1 \div \sqrt{ ( \psi_1, \psi_1 ) }.
Set
\psi_2 \prime = \psi_2 - ( \psi_2 , b_1 ) b_1
.

If \psi_2 \prime is not zero, then set
b_2 = \psi_2 \prime \div \sqrt{ ( \psi_2 \prime , \psi_2 \prime ) }.
I can establish an orthonormal basis { b_1 , b_2 } for the space spanned by \psi_1 , \psi_2 .

Taking in a \psi_3 with
\psi_3 \prime = \psi_3 - ( \psi_3 , b_1 ) b_1 - ( \psi_3 , b_2 ) b_2
not zero, we can set
b_3 = \psi_3 \prime \div \sqrt{ ( \psi_3 \prime , \psi_3 \prime ) }
and span an even larger space.

Continuing this process, we can establish an orthonormal basis as long as the Hilbert space has finite or infinite but countable dimension.
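A compact numerical sketch of this process (my own illustration, not from the thread; note the division by the norm \sqrt{ ( \psi \prime , \psi \prime ) }, and that a vector whose remainder \psi \prime is zero gets skipped):

import numpy as np

def gram_schmidt(vectors, tol=1e-12):
    # orthonormalize a list of vectors, skipping (near-)dependent ones
    basis = []
    for psi in vectors:
        for b in basis:
            psi = psi - np.vdot(b, psi) * b      # remove the component along b
        norm = np.sqrt(np.vdot(psi, psi).real)
        if norm > tol:                           # psi' nonzero: keep psi'/||psi'||
            basis.append(psi / norm)
    return basis

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([2.0, 1.0, 1.0])]                 # third = first + second
B = gram_schmidt(vs)
print(len(B))                                    # 2: the dependent vector was skipped
G = [[np.vdot(a, b) for b in B] for a in B]
print(np.allclose(G, np.eye(len(B))))            # True: orthonormal basis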

Does separability contribute to ensuring its countability?
 
  • #99
II) For a continuous spectrum:

II.1) From any basis to an inner product:

For two vectors v and w written as v = \int v(q) | q > dq and w = \int w(q) | q > dq with any continuous vector-density basis | q >, we can define an inner product as ( v , w ) = \int v(q) \overline{w(q)} dq. This inner product will satisfy all conditions required for an inner product, and { | q > } automatically becomes a generalized orthonormal basis.
( I need to work out the detail of ( v , |q> ) later. )

If | p > = L | q > = \int L(p,q) | q > dq transforms the basis | q > to | p >, and | p > happens to be orthonormal in the inner product we defined, then L shall be a unitary transformation. ( Again, pending a detailed proof. )

If L is unitary, then
| q > = \overline{L^T} | p > = \int \overline{L(q,p)} | p > dp
So, for v = \int v(q) | q > dq, v can be transformed to
v = \int v(q) \int \overline{L(q,p)} | p > dp dq =
\int \int v(q) \overline{L(q,p)} dq | p > dp

So
\int v(q) \overline{L(q,p)} dq
becomes the coefficient representing v in the | p > basis.
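A finite-dimensional sketch of this change of basis (mine, for illustration): a random unitary L transforms the components, and inner products, hence orthonormality, are preserved.

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
L, _ = np.linalg.qr(A)                          # QR of a random matrix yields a unitary
print(np.allclose(L.conj().T @ L, np.eye(4)))   # True: L is unitary

v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
w = rng.standard_normal(4) + 1j * rng.standard_normal(4)
v_p, w_p = L @ v, L @ w                         # components in the new basis

print(np.allclose(np.vdot(w, v), np.vdot(w_p, v_p)))   # True: ( v , w ) unchanged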

II.2) From an inner product to a basis:

The process for this part is almost exactly the same as in the discrete case.
I still need to figure out how separability contributes to ensuring countability.
 
  • #100
Addendum to II.1).

When dealing with ( \psi , q) = < q | \psi >, extra care is needed because |q> could be representing two different things here.

Inside the integral
\int \psi(q) | q > dq
, it's a " vector density".

And we also use it to denote the eigenvector or eigenket of "position"; in this case it's a normal vector, not a "vector density".

Strictly speaking, for
\psi (q) = ( \psi, q ) = < q | \psi >
, if |q> is a "vector density" here, then this is not an inner product but rather an "inner product density".

But with this in mind, I am able to write the eigenket as
| q > = \int \delta(q \prime - q ) |q \prime > dq \prime
, or more precisely, if considering phase factors,
| q > = \int \delta(q \prime - q ) e^{ik(q \prime)} | q \prime > dq \prime =

lim_{n \rightarrow \infty} \int \sqrt{n} \, \pi^{ - \frac{1}{4} } e^{- \frac{ n^2 ( q \prime - q)^2 }{2} } e^{ik(q \prime)} | q \prime > dq \prime
.

Here I think using Gaussian wave functions as the approximating function sequence could be best, and I have chosen the prefactor in such a way that their norms are always one.
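A quick numerical check of that normalization (a sketch of mine): with f_n(x) = \sqrt{n} \pi^{-1/4} e^{- n^2 x^2 / 2 }, the L^2 norm is 1 for every n, while the functions narrow toward a spike.

import numpy as np

x = np.linspace(-5.0, 5.0, 100001)
dx = x[1] - x[0]

for n in [1, 4, 16]:
    f = np.sqrt(n) * np.pi**-0.25 * np.exp(-n**2 * x**2 / 2)
    print(n, np.sum(np.abs(f)**2) * dx)   # ~ 1 for every n: a norm-one sequence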

Any way, with this we can say that the "inner product density" of an eigenket |q> and the vector density | q \prime > of the position operator is
< q \prime | q > = \delta ( q \prime - q ) e^{ik(q)} e^{-ik(q \prime)}

And the "inner product" of two eigenkets |q> and | q \prime &gt; shall be
\int \delta ( q \prime \prime - q \prime ) \delta ( q \prime \prime - q ) e^{ik ( q \prime \prime)} e^{-ik \prime (q \prime \prime)} dq \prime \prime

I will see whether I can prove that they give the same value anyway.

And the "inner product" of an eigenket |q> and any ket \psi shall be
&lt; \psi | q &gt; = \int \delta ( q \prime - q ) \overline{\psi( q \prime )} e^{ik(q \prime )} dq \prime
 
Last edited:
