Understanding Change of Basis & Superpositioning of States

In summary, the role of change of basis in the superposition of states is to allow different observables to be represented in different bases. This is done through unitary transformations, such as rotation operators. It is important to note, however, that the physical outcomes of quantum mechanics are independent of the chosen basis: an observable can be measured on a system regardless of the state in which the system is prepared.
  • #1
mike1000
I think I do not quite understand the role that change of basis plays in the superposition of states.

If there is an observable ##A## which is represented by the operator ##\hat{A}##, then the set of observed values for that observable will be the set of eigenvalues of ##\hat{A}##, and the states in which the observable can be measured are the corresponding eigenvectors of the matrix operator ##\hat{A}##.

Let's pretend we have the matrix operator ##\hat{A}## for some observable. However, before we determine its eigenvalues and eigenvectors we apply a rotation operator, ##\hat{R}##, of some type to it. (I do not know if this makes any sense to do in QM, but I know I can do it from a purely linear algebra point of view.)

After I apply the rotation operator to ##\hat{A}## we have the new operator \begin{equation}\hat{B}=\hat{R}\hat{A}\end{equation}Now we determine the eigenvalues and eigenvectors of the ##B## observable.

What is the relationship between the eigenvectors of the ##A## observable and the eigenvectors of the ##B## observable? Will each eigenvector in ##A## be some linear combination (i.e. superposition) of the eigenvectors in ##B##? Are the calculated probabilities of measuring the same event going to be different when measured relative to the two different bases? Will some states which were observable in the ##A## basis not be observable in the ##B## basis?
 
  • #2
mike1000 said:
After I apply the rotation operator to ##\hat{A}## we have the new operator \begin{equation}\hat{B}=\hat{R}\hat{A}\end{equation}Now we determine the eigenvalues and eigenvectors of the ##B## observable.
You don't apply a unitary transformation, such as a rotation, that way. When you rotate the system, you rotate the state as well. Therefore the final state after rotation is ##|\psi_f \rangle = \hat R |\psi_i\rangle##. Measuring the observable ##A## in the rotated system is represented by ##\langle \psi_i | \hat R ^{-1} A \hat R|\psi_i\rangle##, which is equivalent to measuring an observable ##\hat R ^{-1} A \hat R##; this is the proper form of the rotated observable. Note that it is also Hermitian, whereas ##\hat R A## is not.
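A quick NumPy sketch (not from the thread; the matrix ##A## and angle are arbitrary choices) checking this point numerically: ##R^{-1}AR## is Hermitian with the same spectrum as ##A##, while ##RA## is not Hermitian.

```python
import numpy as np

theta = 0.7                      # arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # unitary (real orthogonal) rotation
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])       # a Hermitian observable

B_wrong = R @ A                  # naive "rotated" operator
B_right = R.conj().T @ A @ R     # proper similarity transform R^-1 A R (R^-1 = R^T here)

print(np.allclose(B_wrong, B_wrong.conj().T))   # False: R A is not Hermitian
print(np.allclose(B_right, B_right.conj().T))   # True:  R^-1 A R is Hermitian
print(np.allclose(np.linalg.eigvalsh(B_right), np.linalg.eigvalsh(A)))  # True: same spectrum
```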
 
  • #3
mike1000 said:
Are the calculated probabilities of measuring the same event going to be different when measured relative to the two different basis?
"Measured relative to the two different bases" is not the standard way, if not outright wrong, of saying it in QM. You measure an observable with respect to a state, and a state need not be a basis state. However, if you mean that the matrix representation is expressed in different bases and the measurement is then taken relative to the same state, the answer is no: the expectation value of the observable will not change when only the matrix is represented in a different basis.
mike1000 said:
Will some states which were observable in the ##A## basis not be observable in the ##B## basis?
An observable will stay observable forever.
 
  • #4
blue_leaf77 said:
You don't apply a unitary transformation, such as a rotation, that way. When you rotate the system, you rotate the state as well. Therefore the final state after rotation is ##|\psi_f \rangle = \hat R |\psi_i\rangle##. Measuring the observable ##A## in the rotated system is represented by ##\langle \psi_i | \hat R ^{-1} A \hat R|\psi_i\rangle##, which is equivalent to measuring an observable ##\hat R ^{-1} A \hat R##; this is the proper form of the rotated observable. Note that it is also Hermitian, whereas ##\hat R A## is not.

Thank you for this. In this one simple reply you drove home the importance of Hermitian matrices and unitary transformations and, incidentally, the importance of symmetry and its relationship to real eigenvalues.
 
  • #5
blue_leaf77 said:
"Measured relative to the two different bases" is not the standard way, if not outright wrong, of saying it in QM. You measure an observable with respect to a state, and a state need not be a basis state. However, if you mean that the matrix representation is expressed in different bases and the measurement is then taken relative to the same state, the answer is no: the expectation value of the observable will not change when only the matrix is represented in a different basis.

An observable will stay observable forever.

What I meant by "measured in two different bases" was the following: if we change the basis, vectors which were orthogonal in the first basis may not be orthogonal in the second basis. This suggests to me that the eigenvectors of the ##A## operator would be linear combinations of the eigenvectors of the ##B## operator which, I now know, is ##B=R^{-1}AR##.

Is this correct?
 
  • #6
This is a misconception! An observable can be measured on the system if it is defined on this system, i.e., if there is a measurement which allows you to measure this observable. You can thus measure the observable no matter in which state the system is prepared. If it is prepared in a state described by a statistical operator ##\hat{\rho}##, the probability to measure the eigenvalue ##a## of the operator ##\hat{A}## that represents the observable ##A## is given by
$$P(a)=\sum_{\beta} \langle a,\beta | \hat{\rho}|a, \beta \rangle,$$
where ##|a,\beta \rangle## is a complete set of orthonormalized eigenvectors of ##\hat{A}## for the eigenvalue ##a##.

All the physical outcomes of QT are of course independent of the basis you've chosen to describe your observables. It's as with usual vectors in 3D Euclidean space: A vector is some directed quantity, e.g., the position vector is the directed connection from the origin of your reference frame to the position of the particle. It doesn't depend on the basis chosen to describe this "arrow" in terms of three real numbers.
 
  • #7
vanhees71 said:
This is a misconception! An observable can be measured on the system if it is defined on this system, i.e., if there is a measurement which allows you to measure this observable. You can thus measure the observable no matter in which state the system is prepared. If it is prepared in a state described by a statistical operator ##\hat{\rho}##, the probability to measure the eigenvalue ##a## of the operator ##\hat{A}## that represents the observable ##A## is given by
$$P(a)=\sum_{\beta} \langle a,\beta | \hat{\rho}|a, \beta \rangle,$$
where ##|a,\beta \rangle## is a complete set of orthonormalized eigenvectors of ##\hat{A}## for the eigenvalue ##a##.

All the physical outcomes of QT are of course independent of the basis you've chosen to describe your observables. It's as with usual vectors in 3D Euclidean space: A vector is some directed quantity, e.g., the position vector is the directed connection from the origin of your reference frame to the position of the particle. It doesn't depend on the basis chosen to describe this "arrow" in terms of three real numbers.

I hope I understand correctly what you are saying.

A state vector ##|\psi\rangle## defines a point in Hilbert Space. The state vector has components given by the projection of that point onto some orthonormal basis. For operator ##A## the orthonormal basis are the eigenvectors of ##\hat{A}##. The length squared of the state vector (which is the distance of the point from the origin) is given by the inner product of the vector with itself, \begin{equation}L^2=\langle\psi|\psi\rangle\end{equation} When we change the coordinate system, which I am calling a change of basis, by some unitary transformation (ie a transformation which preserves the inner product) then the same point will be described by a new state vector in the new basis as ##|\phi\rangle##. However the length of the vector will be invariant under the transformation, which implies that \begin{equation}L^2 = \langle\psi|\psi\rangle=\langle\phi|\phi\rangle\end{equation}Which means that I should calculate the same probabilities for the same observable regardless of the basis in which the state vectors are described.

I know I am probably not completely correct, but I think I am getting close.
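The invariance claimed in equation (2) above can be checked numerically. A minimal NumPy sketch (the vector and the unitary below are arbitrary illustrations, not from the thread): the components change under the basis change, but the inner product, and hence ##L^2=\langle\psi|\psi\rangle##, does not.

```python
import numpy as np

rng = np.random.default_rng(0)
psi = rng.normal(size=3) + 1j * rng.normal(size=3)   # components of |psi> in the old basis

# A random unitary (from a QR decomposition) plays the role of the basis change.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
phi = Q.conj().T @ psi            # components of the SAME vector in the new basis

print(np.allclose(psi, phi))                               # False: components differ
print(np.allclose(np.vdot(psi, psi), np.vdot(phi, phi)))   # True: L^2 is invariant
```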
 
  • #8
mike1000 said:
I hope I understand correctly what you are saying.

A state vector ##|\psi\rangle## defines a point in Hilbert Space. The state vector has components given by the projection of that point onto some orthonormal basis. For operator ##A## the orthonormal basis are the eigenvectors of ##\hat{A}##. The length squared of the state vector (which is the distance of the point from the origin) is given by the inner product of the vector with itself, \begin{equation}L^2=\langle\psi|\psi\rangle\end{equation} When we change the coordinate system, which I am calling a change of basis, by some unitary transformation (ie a transformation which preserves the inner product) then the same point will be described by a new state vector in the new basis as ##|\phi\rangle##. However the length of the vector will be invariant under the transformation, which implies that \begin{equation}L^2 = \langle\psi|\psi\rangle=\langle\phi|\phi\rangle\end{equation}Which means that I should calculate the same probabilities for the same observable regardless of the basis in which the state vectors are described.

I know I am probably not completely correct, but I think I am getting close.
Your ##\psi## and ##\phi## are the same state and, if properly normalised, ##L=1##. But the state descriptions differ between bases and you express this by projection into either basis:

##|\psi> = \sum_i |\phi^A_i> <\phi^A_i|\psi> = \sum_j |\phi^B_j> <\phi^B_j|\psi>##

where ##\phi^A_i## are the eigenstates in A and similarly for B.

In particular, if ##\psi = \phi^A_k## is the ##k^{th}## eigenstate in basis A, then it is, in general, a superposition in B:

##|\phi^A_k> = \sum_j |\phi^B_j> <\phi^B_j|\phi^A_k>##
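This expansion is easy to verify numerically. A sketch (my own illustration, not from the thread) using the standard 2x2 Pauli matrices ##\sigma_z## and ##\sigma_x## as the two observables: an eigenvector of one is an equal-weight superposition of the eigenvectors of the other.

```python
import numpy as np

A = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z
B = np.array([[0, 1], [1, 0]], dtype=complex)    # sigma_x

_, va = np.linalg.eigh(A)    # columns are the eigenvectors |phi^A_i>
_, vb = np.linalg.eigh(B)    # columns are the eigenvectors |phi^B_j>

k = 0
coeffs = vb.conj().T @ va[:, k]        # <phi^B_j | phi^A_k>
reconstructed = vb @ coeffs            # sum_j |phi^B_j><phi^B_j|phi^A_k>

print(np.allclose(reconstructed, va[:, k]))        # True: the expansion reproduces it
print(np.allclose(np.abs(coeffs)**2, [0.5, 0.5]))  # True: equal-weight superposition
```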
 
  • #9
mikeyork said:
Your ##\psi## and ##\phi## are the same state and, if properly normalised, ##L=1##. But the state descriptions differ between bases and you express this by projection into either basis:

You are confusing me.

When you say that "your ##\psi## and ##\phi## are the same state" what do you mean? I am pretty sure they have different elements. So why do you say they are the same state?

Please try to keep it simple.
 
  • #10
mike1000 said:
You are confusing me.

When you say that "your ##\psi## and ##\phi## are the same state" what do you mean? I am pretty sure they have different elements. So why do you say they are the same state?

Please try to keep it simple.
I have kept it as simple as possible and you are confusing yourself with notions like "I am pretty sure they have different elements" without apparently knowing what your "elements" are (I don't either). It is the observables that have different eigenvalues.
 
  • #11
mike1000 said:
When you say that "your ##\psi## and ##\phi## are the same state" what do you mean? I am pretty sure they have different elements. So why do you say they are the same state?

I live in a town whose street grid is laid out with avenues running from southwest to northeast, with the cross streets running from southeast to northwest. If I am walking uptown with speed ##v##, I can write my velocity vector as ##(v,0)## in the uptown/crosstown basis or as ##(v/\sqrt{2},v/\sqrt{2})## in the compass basis. Different elements, but same vector with the same physical significance... and the same general idea works for vectors in Hilbert space, which is what the states are.
 
  • Like
Likes dextercioby
  • #12
mikeyork said:
I have kept it as simple as possible and you are confusing yourself with notions like "I am pretty sure they have different elements" without apparently knowing what your "elements" are (I don't either). It is the observables that have different eigenvalues.

No, I am not confusing myself. The components of the state vectors are the projection of the point on each of the basis vectors. If you change the basis vectors the projection of the same point on the new set of base vectors will be different, ie two state vectors with different elements. That is how I view it currently.
 
  • #13
Nugatory said:
I live in a town whose street grid is laid out with avenues running from southwest to northeast, with the cross streets running from southeast to northwest. If I am walking uptown with speed ##v##, I can write my velocity vector as ##(v,0)## in the uptown/crosstown basis or as ##(v/\sqrt{2},v/\sqrt{2})## in the compass basis. Different elements, but same vector with the same physical significance... and the same general idea works for vectors in Hilbert space, which is what the states are.

The way you describe it I agree with. It's the same point described in two different bases, meaning two different state vectors.

I did get something from mikeyorks description. The point in Hilbert Space is the state.
 
  • #14
mike1000 said:
It's the same point described in two different bases.
Please make up your mind about whether you are talking about "points" or states, "elements" or eigenstates. Once you get that clear you will understand that when you say "the same point" you mean the same state because QM is about states and state vectors, not "points" or "elements". It is only the descriptions of the state in terms of the eigenstates that differ and I fully explained this in post #8.
 
  • #15
mikeyork said:
Please make up your mind about whether you are talking about "points" or states, "elements" or eigenstates. Once you get that clear you will understand that when you say "the same point" you mean the same state because QM is about states and state vectors, not "points" or "elements". It is only the descriptions of the state in terms of the eigenstates that differ and I fully explained this in post #8.

Well, I could say the same to you because a state is a point in Hilbert Space! When I say element I mean element of a vector. As far as this statement goes,
It is only the descriptions of the state in terms of the eigenstates that differ
I think that was the entire point of my post #7
 
  • #16
mike1000 said:
Well, I could say the same to you because a state is a point in Hilbert Space!
No. A state is represented (not "is") by a vector in Hilbert space.
When I say element I mean element of a vector. As far as this statement goes, I think that was the entire point of my post #7
The vectors ##|\psi>## and ##|\phi>## in your #7 are the same vector. Since each vector represents a unique state, that means ##\psi## and ##\phi## are the same state. That is one of your areas of confusion, since you insisted they are different states.

I have tried to help you, but it looks to me like you insist on remaining confused between states and their description/expression in a particular basis. Good luck with that.
 
  • #17
mikeyork said:
No. A state is represented (not "is") by a vector in Hilbert space.

The vectors ##|\psi>## and ##|\phi>## in your #7 are the same vector. Since each vector represents a unique state, that means ##\psi## and ##\phi## are the same state. That is one of your areas of confusion, since you insisted they are different states.

I have tried to help you, but it looks to me like you insist on remaining confused between states and their description/expression in a particular basis. Good luck with that.

The vector defines a point in Hilbert Space. It is that point that is invariant under a unitary transformation. If what you said is true then when I change the basis I must change the state, but you told me before that when I change the basis I do not change the state. I am referring to post #8 where you said my ##\psi## and ##\phi## are the same state.

When you say the same vector, that implies to me that corresponding elements of the vectors are equal. That is not true. What is true is that the inner products of each vector with itself are equal. (ie the length of the two vectors is the same)
 
  • #18
When you say the same vector that implies to me that corresponding elements of the vectors are equal. That is not true.
That is just another example of your confusion. Can someone else help this guy? I'm giving up.
 
  • #19
mikeyork said:
That is just another example of your confusion. Can someone else help this guy? I'm giving up.

So you are agreeing that the corresponding elements of the two vectors are not equal?

I don't think you were trying to help me.
 
  • #20
mike1000 said:
So you are agreeing that the corresponding elements of the two vectors are not equal?
The projections of the state vector onto a basis vector are not equal for different bases -- as is evident from my post #8. If you insist on using your own private language, rather than the accepted language of QM, you will remain confused about this and no doubt carry on claiming that a state vector can represent two different states.
If I look at an object in front of me and choose my z-axis as the direction that points to it, then I will have defined its projection onto the z-axis. If I turn my head to the left, but keep my z-axis pointing straight ahead of me, then the projection of the object onto the z-axis will have changed, but the object is in exactly the same position (has the same position vector, is the same state) as before but is expressed in a different frame (basis).

I don't think you were trying to help me.
That is why I'm going to stop trying.
 
  • #21
mikeyork said:
The projections of the state vector onto the basis vectors are not equal, as is evident from my post #8. If you insist on using your own private language, rather than the accepted language of QM, you will remain confused about this and no doubt carry on claiming that a state vector can represent two different states.

That is why I'm going to stop trying.

I do not think I ever claimed that a state vector can represent two different states.

I am not insisting on using my own private language. I will get the language right I have no doubt about that.
 
  • #22
mikeyork said:
The projections of the state vector onto a basis vector are not equal for different bases -- as is evident from my post #8. If you insist on using your own private language, rather than the accepted language of QM, you will remain confused about this and no doubt carry on claiming that a state vector can represent two different states.
If I look at an object in front of me and choose my z-axis as the direction that points to it, then I will have defined its projection onto the z-axis. If I turn my head to the left, but keep my z-axis pointing straight ahead of me, then the projection of the object onto the z-axis will have changed, but the object is in exactly the same position (has the same position vector, is the same state) as before but is expressed in a different frame (basis). That is why I'm going to stop trying.

I know the object is in the exact same position. I have been saying all along that the "point" is invariant under the transformation. Are the eigenvectors of the operator in the two different coordinate systems the same?
 
  • #23
mike1000 said:
I do not think I ever claimed that a state vector can represent two different states.
In post #9, you wrote
When you say that "your ##\psi## and ##\phi## are the same state" what do you mean? I am pretty sure they have different elements. So why do you say they are the same state?
mike1000 said:
I know the object is in the exact same position. I have been saying all along that the "point" is invariant under the transformation.
And again you are evading the fact that the same point describes the same vector and the same state. Turning my head doesn't change the state it just changes the frame of reference.
Are the eigenvectors of the operator in the two different coordinate systems the same?
No but the state is the same. Just because you change the representation (basis, co-ordinate frame, or whatever) it does not mean you have changed the state.
 
  • #24
mike1000 said:
Are the eigenvectors of the operator in the two different coordinate systems the same?
Yes. (The eigenvectors are vectors, so that answer could reasonably have been "yes, of course").
Of course the components are different in different coordinate systems, but the vectors are the same. Another example from my diagonal street grid: The eigenvectors of ##\hat{N}##, the "north" operator, are ##(1,0)## and ##(-1,0)## in the compass basis and ##(\sqrt{2}/2,\sqrt{2}/2)## and ##(-\sqrt{2}/2,-\sqrt{2}/2)## in the avenue/street basis - one points due north and one points due south, no matter the choice of basis.
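A tiny numerical sketch of this street-grid picture (the rotation matrix is my own illustration): rotating the components of the "due north" unit vector by 45 degrees into the grid basis gives the ##(\sqrt{2}/2,\sqrt{2}/2)## components, while the vector itself stays a unit vector.

```python
import numpy as np

north_compass = np.array([1.0, 0.0])     # due north, written in the compass basis
theta = np.pi / 4                        # the street grid is rotated 45 deg from compass
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

north_grid = R @ north_compass           # same arrow, components in the grid basis
print(north_grid)                        # approximately [0.7071, 0.7071]
print(np.isclose(np.linalg.norm(north_grid), 1.0))   # True: still a unit vector
```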
 
  • #25
Nugatory said:
Yes. (The eigenvectors are vectors, so that answer could reasonably have been "yes, of course").
Of course the components are different in different coordinate systems, but the vectors are the same. Another example from my diagonal street grid: The eigenvectors of ##\hat{N}##, the "north" operator, are ##(1,0)## and ##(-1,0)## in the compass basis and ##(\sqrt{2}/2,\sqrt{2}/2)## and ##(-\sqrt{2}/2,-\sqrt{2}/2)## in the avenue/street basis - one points due north and one points due south, no matter the choice of basis.

Of course I know that the head and tail of the vector are the same. I have said all along that the point does not move. And, also the origin does not move. The vector from the origin to the point remains the same under the transformation.

Somehow, I was attaching significance to the values of the individual components of the state vectors. Isn't the value of the individual components interpreted as the probability amplitude? (I know it is the projection of the state vector onto the corresponding basis vector) but it is interpreted as a probability amplitude in that direction.
 
  • #26
mike1000 said:
if we change the basis, vectors which were orthogonal in the first basis may not be orthogonal in the second basis.

This is not correct. Orthogonality of vectors is basis independent.

mike1000 said:
When we change the coordinate system, which I am calling a change of basis, by some unitary transformation (ie a transformation which preserves the inner product) then the same point will be described by a new state vector in the new basis as ##|\phi\rangle##.

This is not correct either. A change of basis doesn't change vectors; it just changes their representation. You are confusing a vector--an abstract object in a vector space--with its components--a set of numbers that represent the vector when a particular basis is chosen.

However, "unitary transformation" is a more general notion than "change of basis". A general unitary transformation can be viewed as changing vectors.

mike1000 said:
the length of the vector will be invariant under the transformation

This is correct (and is true of any unitary transformation).
 
  • #27
mike1000 said:
Somehow, I was attaching significance to the values of the individual components of the state vectors. Isn't the value of the individual components interpreted as the probability amplitude?
Of course, but changing the basis changes neither the vector nor the physical predictions we make when we operate on the vector. One form may be more convenient for calculation than another (for example, in my street grid you would write my velocity vector in the compass basis if you wanted to know how far north I would travel in a given time, but in the avenue/street basis if you wanted to know how many city blocks I would travel through in that time) but it's the same vector either way, and I could do either calculation in either basis and I would get the same answer.

For a real example from quantum mechanics, consider the beam of particles that emerge deflected upwards from a vertically oriented Stern-Gerlach device. Their state is described by a vector in Hilbert space; I will write that vector as ##(1,0)## in the vertical basis and ##(\sqrt{2}/2,\sqrt{2}/2)## in the horizontal basis, but it's the same vector either way. The physical significance of this vector is that it represents a state in which a vertical spin measurement will yield spin-up with 100% probability and a horizontal spin measurement will yield spin-left or spin-right with 50% probability. I can obtain these results no matter what basis I've written the vector in; it's just that the calculation of the probability along a given axis is a lot easier (as in, I can do it by inspection) if I've written the vector using a basis corresponding to that axis.
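These Stern-Gerlach numbers can be reproduced directly (a sketch, not from the thread; the basis-change matrix is the usual Hadamard-type matrix between the vertical and horizontal spin bases):

```python
import numpy as np

up_vertical = np.array([1.0, 0.0])                     # |up> in the vertical basis
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)   # vertical <-> horizontal basis change
up_horizontal = H @ up_vertical                        # same state, horizontal-basis components

# Probability of "up" on a vertical measurement, computed in each representation:
p_vert_in_v = abs(up_vertical @ np.array([1.0, 0.0]))**2          # basis state |up>
p_vert_in_h = abs(up_horizontal @ (H @ np.array([1.0, 0.0])))**2  # |up> re-expressed too

print(p_vert_in_v, p_vert_in_h)      # 1.0 in both representations
print(np.abs(up_horizontal)**2)      # [0.5, 0.5]: 50/50 for a horizontal measurement
```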
 
  • #28
Nugatory said:
Of course, but changing the basis changes neither the vector nor the physical predictions we make when we operate on the vector. One form may be more convenient for calculation than another (for example, in my street grid you would write my velocity vector in the compass basis if you wanted to know how far north I would travel in a given time, but in the avenue/street basis if you wanted to know how many city blocks I would travel through in that time) but it's the same vector either way, and I could do either calculation in either basis and I would get the same answer.

For a real example from quantum mechanics, consider the beam of particles that emerge deflected upwards from a vertically oriented Stern-Gerlach device. Their state is described by a vector in Hilbert space; I will write that vector as ##(1,0)## in the vertical basis and ##(\sqrt{2}/2,\sqrt{2}/2)## in the horizontal basis, but it's the same vector either way. The physical significance of this vector is that it represents a state in which a vertical spin measurement will yield spin-up with 100% probability and a horizontal spin measurement will yield spin-left or spin-right with 50% probability. I can obtain these results no matter what basis I've written the vector in; it's just that the calculation of the probability along a given axis is a lot easier (as in, I can do it by inspection) if I've written the vector using a basis corresponding to that axis.

I think I have addressed most of everyone's concerns in post #7. I actually think that is a good post and I do not want it to be lost. I was responding to Vanhees' post where he said, basically, that probabilities are invariant under a change of basis. I think I did a good job of simply explaining why probabilities are invariant under a change of basis. In particular, Equation (2) below shows this. I also recognized that equations like the following are also true: ##\alpha |\psi\rangle \neq \alpha| \phi\rangle##. I did not realize, immediately, that the different state vectors actually were two different representations of the same state. Equation (2) is the reason that all the expectation values etc. are invariant, because the inner product has been preserved. (Which, I see now, is obvious, because ##|\psi\rangle## and ##|\phi\rangle## are just two different representations of the same vector.)

Here it is again.
A state vector ##|\psi\rangle## defines a point in Hilbert Space. The state vector has components given by the projection of that point onto some orthonormal basis. For operator ##A## the orthonormal basis are the eigenvectors of ##\hat{A}##. The length squared of the state vector (which is the distance of the point from the origin) is given by the inner product of the vector with itself, \begin{equation}L^2=\langle\psi|\psi\rangle\end{equation} When we change the coordinate system, which I am calling a change of basis, by some unitary transformation (ie a transformation which preserves the inner product) then the same point will be described by a new state vector in the new basis as ##|\phi\rangle##. However the length of the vector will be invariant under the transformation, which implies that \begin{equation}L^2 = \langle\psi|\psi\rangle=\langle\phi|\phi\rangle\end{equation}Which means that I should calculate the same probabilities for the same observable regardless of the basis in which the state vectors are described.
 
  • #29
Again: The vector ##|\psi \rangle## is independent of any basis. It can be decomposed with respect to any complete orthonormal set ##|a_k \rangle## via
$$|\psi \rangle=\sum_k |a_k \rangle \langle a_k|\psi \rangle.$$
A unitary transformation is not a basis transformation but a map from vectors in ##\mathcal{H}## to vectors in ##\mathcal{H}## such that all scalar products of vectors in ##\mathcal{H}## are unchanged.

Quantum theory is invariant under arbitrary unitary transformations in the sense that if you map all vectors as
$$|\psi' \rangle=\hat{U} |\psi \rangle, \quad \hat{U}^{-1}=\hat{U}^{\dagger},$$
and all operators via
$$\hat{A}'=\hat{U} \hat{A} \hat{U}^{\dagger},$$
nothing changes.

A basis transformation is given by a unitary matrix, i.e., if ##|a_k \rangle## and ##|a_k' \rangle## are two such complete orthonormal sets you have
$$|a_k \rangle = \sum_j |a_j' \rangle \langle a_j'| a_k \rangle=\sum_j U_{jk} |a_j' \rangle,$$
as for any vector. Now for any vector ##|\psi \rangle## you have
$$|\psi \rangle = \sum_{k} |a_k \rangle \langle a_k |\psi \rangle = \sum_{j,k} |a_j' \rangle \langle a_j'|a_k \rangle \langle a_k|\psi \rangle,$$
i.e.,
$$\psi_j'=\sum_k U_{jk} \psi_k.$$
From the orthonormality of both complete sets you get the unitarity of the matrix ##U_{jk}##, i.e.,
$$\sum_{j} U_{jk}^* U_{jl}=\delta_{kl}.$$

The probabilities to find a certain value for an observable are trivially independent of the choice of basis with whose help you evaluate them, because they do not depend on any special basis anyway. If ##|a,\beta \rangle## are the orthonormalized eigenvectors of ##\hat{A}## (representing an observable ##A##) and the system is in a pure state represented by a normalized vector ##|\psi \rangle##, then the probability is given by
$$P(a)=\sum_{\beta} |\langle a,\beta|\psi \rangle|^2.$$
Of course the set ##|a,\beta \rangle##, with ##a## running over all eigenvalues of ##\hat{A}##, is itself also a complete orthonormalized set, because ##\hat{A}## is self-adjoint.
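The basis-change matrix ##U_{jk}=\langle a_j'|a_k \rangle## defined above, its unitarity, and the component transformation ##\psi_j'=\sum_k U_{jk}\psi_k## can all be checked numerically. A sketch (both orthonormal bases are arbitrary illustrations built from QR decompositions):

```python
import numpy as np

rng = np.random.default_rng(2)
A_basis, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
B_basis, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

U = B_basis.conj().T @ A_basis          # U_jk = <a'_j | a_k>
print(np.allclose(U.conj().T @ U, np.eye(3)))   # True: sum_j U*_jk U_jl = delta_kl

psi = rng.normal(size=3) + 1j * rng.normal(size=3)   # components in the A basis
vec = A_basis @ psi                      # the abstract vector itself
psi_prime = B_basis.conj().T @ vec       # components of the same vector in the B basis
print(np.allclose(psi_prime, U @ psi))   # True: psi'_j = sum_k U_jk psi_k
```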
 
  • #30
vanhees71 said:
The probabilities to find a certain value for an observable are trivially independent of the choice of basis with whose help you evaluate them, because they do not depend on any special basis anyway. If ##|a,\beta \rangle## are the orthonormalized eigenvectors of ##\hat{A}## (representing an observable ##A##) and the system is in a pure state represented by a normalized vector ##|\psi \rangle##, then the probability is given by
$$P(a)=\sum_{\beta} |\langle a,\beta|\psi \rangle|^2.$$
Of course the set ##|a,\beta \rangle##, with ##a## running over all eigenvalues of ##\hat{A}##, is itself also a complete orthonormalized set, because ##\hat{A}## is self-adjoint.

This summation over ##\beta## keeps popping up in this thread. What does it mean? Let's say the Hilbert space is finite-dimensional, then for a self-adjoint operator ##\widehat{A}## we have a spectral decomposition
$$
\widehat{A} = \sum_{a \in \sigma(\widehat{A})} a \widehat{\Pi}_a ,
$$
where ## \widehat{\Pi}_a## are projectors on corresponding eigenspaces and when they are one-dimensional ## \widehat{\Pi}_a = |a \rangle \langle a|##. Then, for a system in state ##\psi##, the probability to observe value ##a## is
$$
P(a) = \langle \widehat{\Pi}_a \rangle = \langle \psi | \widehat{\Pi}_a | \psi \rangle = |\langle a | \psi \rangle|^2,
$$
where the first equality holds for mixtures as well and the last one is for one-dimensional eigenspaces. I don't think there's any need for additional summations. Or are these summations over bases of the eigenspaces?
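The spectral-decomposition formulas above can be sketched numerically (my own illustration, using ##\sigma_z## and an arbitrary normalized state): rebuild ##\widehat{A}=\sum_a a\,\widehat{\Pi}_a## and compute ##P(a)=\langle\psi|\widehat{\Pi}_a|\psi\rangle##.

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, -1.0]])          # sigma_z, eigenvalues +1, -1
eigvals, eigvecs = np.linalg.eigh(A)

# Spectral decomposition A = sum_a a |a><a| (one-dimensional eigenspaces here).
projectors = {a: np.outer(eigvecs[:, i], eigvecs[:, i].conj())
              for i, a in enumerate(eigvals)}
A_rebuilt = sum(a * P for a, P in projectors.items())
print(np.allclose(A_rebuilt, A))                 # True

psi = np.array([np.sqrt(0.25), np.sqrt(0.75)])   # a normalized state
probs = {a: float(psi.conj() @ P @ psi) for a, P in projectors.items()}
print(probs[1.0], probs[-1.0])                   # 0.25 and 0.75
```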

The confusions in the initial post were in the last two questions

mike1000 said:
Are the calculated probabilities of measuring the same event going to be different when measured relative to the two different bases? Will some states which were observable in the ##A## basis not be observable in the ##B## basis?

The first question here refers to an 'event', which requires definition. The second question silently presumes that states are observable (they are not). I hope the OP has cleared these things up for himself.
 
  • #31
Well, in general, to an eigenvalue ##a## there correspond several linearly independent eigenvectors (in QT one talks about "degeneracy"), and then I label them as ##|a,\beta\rangle##. You can always choose them as an orthonormal set. For a pure state represented by a normalized ##|\psi \rangle##, the probability to measure the value ##a## of the observable ##A## is then
$$P(a)=\sum_{\beta} |\langle a,\beta|\psi \rangle|^2.$$
That's of course the same as
$$P(a)=\sum_{\beta} \langle a,\beta|\hat{\rho}|a,\beta \rangle$$
with
$$\hat{\rho}=|\psi \rangle \langle \psi|.$$
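A short NumPy sketch of this degenerate case (the 3x3 operator and state are illustrative choices, not from the thread): the sum over the degeneracy label ##\beta## with ##\hat{\rho}=|\psi\rangle\langle\psi|## gives the total probability of the degenerate eigenvalue.

```python
import numpy as np

A = np.diag([1.0, 1.0, 2.0])        # eigenvalue a = 1 is doubly degenerate
eigvals, eigvecs = np.linalg.eigh(A)

psi = np.array([0.6, 0.0, 0.8])     # a normalized state
rho = np.outer(psi, psi.conj())     # pure-state density matrix |psi><psi|

# Sum over the degeneracy label beta for a = 1 (any orthonormal basis of the
# eigenspace gives the same answer, since the sum equals Tr(Pi_a rho)):
cols = [i for i, a in enumerate(eigvals) if np.isclose(a, 1.0)]
P1 = sum(float(eigvecs[:, i].conj() @ rho @ eigvecs[:, i]) for i in cols)
print(P1)                           # approximately 0.36 = |psi_1|^2 + |psi_2|^2
```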
 

1. What is the concept of change of basis in quantum mechanics?

Change of basis is a mathematical operation that allows us to express a quantum state in terms of a different set of basis states. In quantum mechanics, a basis is a set of states that can be used to represent any other state. By changing the basis, we can gain a different perspective on the state and make calculations easier.

2. How does superpositioning of states work in quantum mechanics?

Superposition of states is a fundamental principle of quantum mechanics which says that a quantum system can exist in multiple states simultaneously. This means that the system can be in a combination of states rather than being confined to a single state. This is represented by a mathematical object called a wavefunction, which contains information about the probability of finding the system in each state.

3. What is the significance of change of basis and superpositioning of states in quantum computing?

Change of basis and superposition of states are crucial concepts in quantum computing. By changing basis and exploiting superposition, quantum computers can perform certain calculations much faster than classical computers, because a quantum algorithm can act on a superposition of many basis states at once.

4. Can you give an example of change of basis and superpositioning of states in quantum mechanics?

One example of change of basis and superpositioning of states is the quantum bit, or qubit. A qubit can exist in a superposition of two basis states, representing both 0 and 1 simultaneously. By changing the basis, we can manipulate the qubit to perform different calculations and operations.
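The qubit example can be sketched with plain NumPy (no quantum library assumed; the Hadamard-type matrix below is the standard change between the computational and ##|+\rangle,|-\rangle## bases): a qubit prepared in ##|0\rangle## is an equal superposition when viewed in the other basis.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                       # |0> in the computational basis
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

amplitudes_pm = H @ ket0                          # components in the |+>,|-> basis
print(np.abs(amplitudes_pm)**2)                   # [0.5, 0.5]: equal superposition
```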

5. How does understanding change of basis and superpositioning of states contribute to advancements in technology?

Understanding change of basis and superpositioning of states has led to the development of quantum technologies such as quantum computers, quantum cryptography, and quantum sensors. These technologies have the potential to greatly impact various industries, from finance and healthcare to communication and security, by providing faster and more secure solutions.
