A few questions about Griffiths' book

  • Context: Undergrad
  • Thread starter: Isaac0427
  • Tags: Book, Griffiths
Discussion Overview

This thread discusses specific equations from Griffiths' "Introduction to Quantum Mechanics," focusing on the derivation and interpretation of equations 3.114 and 3.116. The scope includes conceptual clarifications and technical explanations related to quantum mechanics and operator theory.

Discussion Character

  • Technical explanation
  • Conceptual clarification
  • Debate/contested

Main Points Raised

  • The original poster (OP) expresses confusion regarding how Griffiths derives the integral in equation 3.114 and the definition of the standard deviation in equation 3.116.
  • Some participants note that the OP is using the 1st edition of the book, which differs significantly from the 2nd edition, leading to potential misunderstandings.
  • One participant suggests that the integral related to equation 3.114 can be obtained from equation 1.17, while the standard deviation is defined in equation 1.19.
  • Another participant questions the accuracy of Griffiths' text, citing confusion among readers as a recurring issue.
  • Some participants propose viewing the integral as a sum to aid understanding of the expectation value of an observable.
  • There is a suggestion that the OP might benefit from exploring alternative texts, such as Ballentine's "QM -- A Modern Development," which may present the material more clearly.
  • Further discussion includes the relationship between equations 1.28 and 3.114, with participants debating the use of abstract versus concrete Hilbert space representations.

Areas of Agreement / Disagreement

Participants express differing views on the clarity and accuracy of Griffiths' explanations, with some agreeing on the need for more elaboration on the derivations, while others remain skeptical about the text's reliability. The discussion does not reach a consensus on the best approach to understanding the equations in question.

Contextual Notes

There are limitations regarding the differences between editions of Griffiths' book, which may affect the understanding of the equations discussed. Additionally, the discussion reflects varying levels of familiarity with quantum mechanics concepts among participants.

  • #31
Twigg said:
Ok, this is definitely a terminology issue. Sorry if I made anything unclear. To avoid further confusion about terminology, I'll stick to empirical examples.

The expectation value of an operator is the average value you would measure. For example, the expectation value of the angular momentum operator (##\langle L_{z} \rangle##) operating on the ground state wavefunction of the hydrogen atom (##|\psi_{100}\rangle##) is just the angular momentum you would measure on average if you had a large sample of (non-interacting) ground state hydrogen atoms. To clarify, if you picked atoms one by one out of this sample and measured their ground state angular momentum, the average of your data would be ##\langle L_{z} \rangle##, aka the expectation value of the angular momentum operator in the ground state.
My intention was to compare the version that Griffiths uses for the standard deviation of an observable to the standard definition of the standard deviation for a continuous variable, which you mentioned you understand. If this is different from what you're used to, the definitions I've used can be found and are motivated in Sections 5 and 6 of Chapter 15 of the 3rd Edition of Mary Boas's book Mathematical Methods in the Physical Sciences.

To paraphrase briefly, the standard deviation is the average "spread" of a function, in the sense of a least-squares fit. Notice that ## Q - \langle q \rangle ## is the distance of the function Q from its average (aka expected value), ##\langle q \rangle##. At each point in space (i.e. each value of x), the distance ##Q - \langle q \rangle## can vary. The quantity ##(Q - \langle q \rangle)^{2}##, the square of that distance (as a function of x), is a good measure of how far the function Q has strayed from its average (at the point x). To extend this to the whole distribution (whereas before it applied only to a single point x), we take the average of the quantity ##(Q - \langle q \rangle)^{2}##, which is called the variance, ##\mathrm{Var}_{Q} = \int (Q - \langle q \rangle )^{2} \rho(x) \, dx##. The variance is how much the function Q differs from its average value ##\langle q \rangle##, squared, on average. The standard deviation is defined as the square root of the variance.
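These two formulas (the expectation value and the variance of a continuous distribution) are easy to check numerically. Below is a minimal sketch, assuming a Gaussian probability density whose mean and spread are known in advance; all parameter values are illustrative choices, not from the thread.

```python
import numpy as np

# Grid over which the probability density is sampled (illustrative range and resolution)
x = np.linspace(-12.0, 15.0, 27001)
dx = x[1] - x[0]

# Gaussian probability density with known mean 1.5 and standard deviation 2.0
mu_true, sigma_true = 1.5, 2.0
rho = np.exp(-(x - mu_true) ** 2 / (2 * sigma_true ** 2)) / (sigma_true * np.sqrt(2 * np.pi))

# Expected value: <x> = integral of x * rho(x) dx
mean = np.sum(x * rho) * dx

# Variance: integral of (x - <x>)^2 * rho(x) dx; the standard deviation is its square root
variance = np.sum((x - mean) ** 2 * rho) * dx
std = np.sqrt(variance)
```

The computed `mean` and `std` recover the parameters the density was built from, which is the sanity check that the integral definitions really do measure location and spread.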

Hope this helps!
Can you please put this in terms of operators and observables? Thanks.
 
  • #32
To clarify, I don't see how you can find the difference between ##-i\hbar \partial_x## and ##\hbar k##, and how you can take the expectation value of the former.
 
  • #33
We're at a point where leaving the hats off the operators can lead to serious ambiguity... I know people do it all the time, but it's sloppy, and they only get away with it when everyone in the discussion can figure out from context when a symbol represents an operator. In LaTeX you put the hats on using \hat - for example, \hat{X} renders as ##\hat{X}##.
 
  • #34
Isaac0427 said:
To clarify, I don't see how you can find the difference between ##-i\hbar \partial_x## and ##\hbar k##, and how you can take the expectation value of the former.

##\hat{p}=-i\hbar \partial_x## is the momentum operator. Its expectation value is ##\int\psi^*\hat{p}\,\psi\,dx=-i\hbar\int\psi^*\partial_x\psi\,dx##.

##\hbar k## is also an operator (albeit a rather trivial one), and its expectation value is ##\int\psi^*\hbar k\,\psi\,dx=\hbar k\int\psi^*\psi\,dx=\hbar k##.

If ##\hat{A}## and ##\hat{B}## are both operators, then the operator ##\hat{C}=\hat{A}-\hat{B}## is the operator that satisfies ##\hat{C}\psi=\hat{A}\psi-\hat{B}\psi##. This is the definition of subtraction for linear operators, and you can use it to find the difference between the operators ##-i\hbar \partial_x## and ##\hbar k##.
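The pointwise definition of operator subtraction can be sketched numerically with a finite-difference momentum operator. This is a sketch under assumed conventions that are mine, not the thread's: units with hbar = 1, an illustrative Gaussian wavepacket whose mean momentum is hbar*k, and `np.gradient` standing in for the derivative.

```python
import numpy as np

hbar = 1.0   # units with hbar = 1 (illustrative choice)
k = 2.0      # the constant appearing in the "hbar k" operator

x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]

# Sample state: a Gaussian wavepacket with mean momentum hbar*k (illustrative)
psi = np.exp(-x ** 2 / 4) * np.exp(1j * k * x)
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize

def p_op(f):
    """Momentum operator -i*hbar*d/dx, via central differences."""
    return -1j * hbar * np.gradient(f, dx)

def hk_op(f):
    """The (trivial) multiplication operator hbar*k."""
    return hbar * k * f

# The difference operator C = A - B acts pointwise: C psi = A psi - B psi
diff_psi = p_op(psi) - hk_op(psi)

# Expectation value of the difference; for this particular state <p> = hbar*k,
# so the expectation value of (p_op - hk_op) should vanish
expect_diff = (np.sum(np.conj(psi) * diff_psi) * dx).real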
 
Last edited:
  • #35
Nugatory said:
##\hat{p}=-i\hbar \partial_x## is the momentum operator. Its expectation value is ##\int\psi^*\hat{p}\,\psi\,dx=-i\hbar\int\psi^*\partial_x\psi\,dx##.

##\hbar k## is also an operator (albeit a rather trivial one), and its expectation value is ##\int\psi^*\hbar k\,\psi\,dx=\hbar k\int\psi^*\psi\,dx=\hbar k##.

If ##\hat{A}## and ##\hat{B}## are both operators, then the operator ##\hat{C}=\hat{A}-\hat{B}## is the operator that satisfies ##\hat{C}\psi=\hat{A}\psi-\hat{B}\psi##. This is the definition of subtraction for linear operators, and you can use it to find the difference between the operators ##-i\hbar \partial_x## and ##\hbar k##.
So, the expectation value of ##\hat Q## is the same as the expectation value of its eigenvalues, correct? I was using ##\hbar k## not as an operator but as the eigenvalue of the momentum operator (again, the terminology may not be great, but you know what I mean). So, given that, wouldn't the expectation value of ##\hbar k## (as an observable, not an operator) be the same as the expectation value of ##\hat p##? Evaluating the integral, keeping in mind that the momentum operator is Hermitian, appears to give me momentum, or ##\hbar k##. Again, this raises the question of why we can say that the difference between the momentum operator and the expectation value of momentum has anything to do with the standard deviation.
 
  • #36
Apologies for my loose notation in previous posts.

Isaac0427 said:
So, the expectation value of ##\hat Q## is the same as the expectation value of its eigenvalues, correct?

That's correct. The eigenvalues of an observable are the expectation values of the corresponding operator when the wavefunction is the corresponding eigenstate. In math, ## q_{n} = \int \psi^{*}_{n} \hat{Q} \psi_{n} dx##, where ##\psi_{n}## is the n-th eigenstate corresponding to ##q_{n}##, the n-th eigenvalue of the operator ##\hat{Q}##. That is in part why you can re-write the expectation value of ##\hat{Q}## in any state in terms of its eigenvalues ##q_{n}##, from a mathematical point of view. If an arbitrary state is given by ##\psi = \sum_{n} c_{n} \psi_{n}##, then the connection between the expectation value of ##\hat{Q}## and the eigenvalues ##q_{n}## is given by ##\langle \hat{Q} \rangle = \sum_{n} |c_{n}|^{2} q_{n}##. (Watch out for non-normalizable wavefunctions, though; these formulas are not accurate in that case.)

Nugatory said:
##\hbar k## is also an operator (albeit a rather trivial one), and its expectation value is ##\int\psi^*\hbar k\psi=\hbar k\int\psi^*\psi=\hbar k##.

I may be wrong, but I think this is a bit misleading. By treating k as a constant (by pulling it out of the integral like that), it's assumed that the wavefunction is something like a perfectly single-frequency plane wave. That isn't generally the case though. In general, the wavevector operator is given as $$\hat{k_{x}} = -i \frac{\partial}{\partial x}$$ and its expectation value is $$\langle \hat{k_{x}} \rangle = \int_{-\infty}^{\infty} \psi^{*}(x) (-i) \frac{\partial \psi}{\partial x} dx$$.
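This point can be checked numerically: for a superposition of two plane waves under an envelope, the expectation value of ##\hat{k_{x}}## lands between the two wavevectors rather than on either one, so no single constant k can be pulled out of the integral. A sketch with illustrative parameters (the wavevectors, envelope, and grid are my choices, not from the thread):

```python
import numpy as np

x = np.linspace(-60.0, 60.0, 12001)
dx = x[1] - x[0]

k1, k2 = 1.0, 3.0   # two wavevector components (illustrative)

# Equal-weight superposition of two plane waves under a broad Gaussian envelope,
# so the state is NOT a single-frequency plane wave
envelope = np.exp(-x ** 2 / (2 * 15.0 ** 2))
psi = envelope * (np.exp(1j * k1 * x) + np.exp(1j * k2 * x))
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalize

# <k_x> = integral of psi* (-i d/dx) psi dx, with the derivative via np.gradient
dpsi = np.gradient(psi, dx)
k_exp = (np.sum(np.conj(psi) * (-1j) * dpsi) * dx).real
# k_exp comes out close to (k1 + k2) / 2, not to k1 or k2 individually
```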

Isaac0427 said:
So, given that, wouldn't the expectation value of ##\hbar k## (as an observable, not an operator) be the same as the expectation value of ##\hat p##? Evaluating the integral, keeping in mind that the momentum operator is Hermitian, appears to give me momentum, or ##\hbar k##.

Can you show how you evaluated it? In general, one needs a wavefunction to take expectation values, so I'm not sure what you mean. I may be missing something.

Isaac0427 said:
Can you please put this in terms of opperators and observables? Thanks.

The point I'm trying to get across is that the observable is the operator (they are the same thing, as far as the math is concerned), and the value you'd see in a measurement (on average) is the expectation value of that operator. In short, an observable in quantum mechanics is a Hermitian linear operator ##\hat{Q}##, which can be measured in the lab, giving an average measured value of ##\langle \hat{Q} \rangle## (the expectation value) with a standard deviation of the measured values given by ##\sqrt{\langle (\hat{Q} - \langle \hat{Q} \rangle)^{2} \rangle}##. (I know I haven't answered your other question about this definition yet; I'll address that in the next section of this post.)

Isaac0427 said:
Again, this raises the question on why we can say that the difference between the momentum opperator and the expectation value of momentum has anything to do with the standard deviation.

The definition of the standard deviation of an observable, in terms of the difference between its operator and its expectation value, is identical (at least for position and momentum, which I can prove) to the usual definition of the standard deviation of a continuous random variable in general statistics. (This is what I attempted to show two posts back.) So, before I can answer your question, I need to know whether you are confused by the general definition (outside of quantum mechanics) of the standard deviation of a continuous random variable, defined as ##\sigma_{x} = \sqrt{\int (x - \langle x \rangle)^{2} \rho(x) dx}##, where ##\rho(x)## is the probability density function and ##\langle x \rangle## is the expected value of x. In other words: are you comfortable with the general statistical definition but confused about how it translates into quantum mechanics, or are you confused about the general definition itself?
 
  • #37
Isaac0427 said:
So, the expectation value of ##\hat Q## is the same as the expectation value of its eigenvalues, correct?
[After some sleep, breakfast and a couple of cups of coffee, I rewrote this a bit to make the notation more consistent]

Suppose an operator ##\hat Q## has a set of eigenstates ##\psi_i## with corresponding eigenvalues ##Q_i## so that ##\hat Q \psi_i = Q_i \psi_i##. These eigenstates are orthogonal, i.e. ##\int \psi_i^* \psi_j dx = 0## for ##i \ne j##. Assume they're also normalized, i.e. ##\int \psi_i^* \psi_i dx = 1##.

If the state is an eigenstate of ##\hat Q##, i.e. one of the ##\psi_i##, then the expectation value of ##\hat Q## for that state is the corresponding eigenvalue, ##Q_i##: $$\langle \hat Q \rangle = \int \psi_i^* \hat Q \psi_i dx = \int \psi_i^* Q_i \psi_i dx = Q_i \int \psi_i^* \psi_i dx = Q_i$$ If the state is not an eigenstate of ##\hat Q##, then it can be written as a linear combination of the eigenstates: $$\psi = \sum_i c_i \psi_i$$ The expectation value of ##\hat Q## for this state is $$\langle \hat Q \rangle = \int \psi^* \hat Q \psi dx = \int \left( \sum_i c_i^* \psi_i^* \right) \hat Q \left( \sum_j c_j \psi_j \right) dx$$ which works out to be $$\langle \hat Q \rangle = \sum_i c_i^* c_i Q_i$$ That is, the expectation value of ##\hat Q## is a weighted average of the eigenvalues ##Q_i##, which I suppose one could call the "expectation value of the eigenvalues" although I don't remember seeing anyone ever actually use that phrase. (Corrections and references welcome!)
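The identity ##\langle \hat Q \rangle = \sum_i c_i^* c_i Q_i## can be verified in a finite-dimensional example, where the operator is a Hermitian matrix and the orthonormal eigenstates come from `numpy.linalg.eigh`. A sketch; the random matrix and state are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random Hermitian matrix standing in for an observable Q-hat (illustrative)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Q = (A + A.conj().T) / 2

# Eigenvalues Q_i and orthonormal eigenstates (the columns of V)
Q_i, V = np.linalg.eigh(Q)

# An arbitrary normalized state expanded in the eigenbasis: psi = sum_i c_i psi_i
c = rng.normal(size=4) + 1j * rng.normal(size=4)
c = c / np.linalg.norm(c)
psi = V @ c

# Direct expectation value <psi| Q |psi> ...
direct = (psi.conj() @ Q @ psi).real
# ... equals the weighted average of the eigenvalues, sum_i |c_i|^2 Q_i
weighted = float(np.sum(np.abs(c) ** 2 * Q_i))
```

Because the columns of `V` are orthonormal, the matrix algebra here mirrors the integral derivation above term by term: the cross terms between different eigenstates drop out, leaving only the ##|c_i|^2 Q_i## contributions.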
 
Last edited:
  • Likes: DrClaude
  • #38
Thanks guys, I think I understand it now.
 
  • #39
Something which no one seems to have pointed out (I may have missed it and it may not have been the issue anyway) is (using stats notation, where ##\mu = E[X]##):

##Var(X) = E[(X-\mu)^2] = E[X^2 - 2\mu X + \mu^2] = E[X^2] - 2\mu E[X] + \mu^2 = E[X^2] - 2\mu^2 + \mu^2 = E[X^2] - \mu^2##

Hence:

##Var(X) = E[(X-\mu)^2]## (by definition)

and

##Var(X) = E[X^2] - E[X]^2##
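The same algebra holds exactly for sample moments, which makes the identity easy to check numerically (the choice of distribution below is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.exponential(scale=2.0, size=100_000)  # any distribution works here

mu = X.mean()                        # sample estimate of E[X]
var_def = np.mean((X - mu) ** 2)     # E[(X - mu)^2], the defining form
var_id = np.mean(X ** 2) - mu ** 2   # E[X^2] - E[X]^2, the expanded form
```

The two quantities agree to floating-point precision, since expanding the square in `var_def` reproduces `var_id` exactly, just as in the symbolic derivation above.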
 
  • Likes: vanhees71, Isaac0427 and Twigg
