A few questions about Griffiths' book

  • #1
Isaac0427
In Griffiths' intro to quantum mechanics, there are a few things that I feel like he gets from nowhere: he just states them and doesn't derive or prove them.

First is equation 3.114, using operators to get the expectation value of an observable. I get how he got the inner product from the integral, but I don't get how he got the integral in the first place.

Second, in equation 3.116, which is later used in section 3.4, how does he get that definition for the standard deviation?

Also, can we please not focus on the technicalities? For example, if "derive" is not the correct term, you still know what I mean, so please don't use an entire post correcting me. I'd like to focus this thread on my two questions.

Thanks!
 
  • #2
Can you cite the formulae? In my 2nd edition of the book, there are no Eqs. 3.114 and 3.116 :-(.
 
  • #3
The OP uses the 1st edition, which substantially differs from the 2nd edition.

The integral related to [3.114] is obtained from [1.17]. The standard deviation is defined in [1.19].
 
  • #4
Well, I'm not so convinced about this book, given the many questions in this forum, which turn out to originate from confusion caused by inaccuracies. Unfortunately, I don't have the 1st edition.
 
  • #5
Demystifier said:
The OP uses the 1st edition, which substantially differs from the 2nd edition.

The integral related to [3.114] is obtained from [1.17]. The standard deviation is defined in [1.19].
Neither of those has anything to do with my questions.
 
  • #6
vanhees71 said:
Well, I'm not so convinced about this book, given the many questions in this forum, which turn out to originate from confusion caused by inaccuracies. Unfortunately, I don't have the 1st edition.
The first edition is free online.
 
  • #7
It might help to think about the integral as a sum first. Think how you might write the expectation of an eigenvalue if there were only three possible eigenstates and eigenvalues.
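For example, a minimal sketch (assuming a state ##\Psi = c_1\psi_1 + c_2\psi_2 + c_3\psi_3## built from orthonormal eigenfunctions with ##\hat Q\psi_j = q_j\psi_j##): a measurement returns ##q_j## with probability ##|c_j|^2##, so $$\langle q \rangle ~=~ |c_1|^2 q_1 + |c_2|^2 q_2 + |c_3|^2 q_3 ~=~ \sum_{j=1}^{3}|c_j|^2 q_j ~.$$ The integral in (3.114) is what this bookkeeping becomes when the sum is written as an inner product.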
 
  • #8
Isaac0427 said:
The first edition is free online.

I doubt it is a legit copy. Yes, the second edition has already appeared, but the original publisher won't give up its copyright on the first, because it is a source of cash even while no longer in print. Perhaps Griffiths got permission from the publisher to disseminate the draft of his first edition on his website, but that's also highly unlikely. AFAIK, it is the other way around: some teachers (Carroll, Srednicki, Teschl, etc.) first publish notes on their websites, thus for free, and later refine them into a book which they get published. If we don't have the cash for the book, at least we can still use the original notes.
 
  • #9
Isaac0427 said:
First is equation 3.114, using operators to get the expectation value of an observable. I get how he got the inner product from the integral, but I don't get how he got the integral in the first place.

Second, in equation 3.116, which is later used in section 3.4, how does he get that definition for the standard deviation?
Could you write those two equations in LaTeX and post them? That might get this past the digression about different versions...

(Actually I'm pretty sure that I can guess what they are but if I happen to guess wrong, or just assume a different notation than you're looking at, I'll end up introducing unnecessary noise into the thread - so let's do it right).
 
  • #10
Isaac0427 said:
Neither of those has anything to do with my questions.

If you use the 1st edition, then Demystifier's post is 100% correct, but probably needs more elaboration for you.
 
  • #11
Isaac0427 said:
In Griffiths' intro to quantum mechanics, there are a few things that I feel like he gets from nowhere: he just states them and doesn't derive or prove them.

Oh my gawd (referring to both the string of non-answer posts in this thread, AND to the way Griffiths does things). <sigh>

First is equation 3.114, using operators to get the expectation value of an observable. I get how he got the inner product from the integral, but I don't get how he got the integral in the first place.
The integral is a generalization of formulas he already wrote in ch1, e.g., eq(1.28). Unfortunately, he wrote: $$\langle x\rangle ~=~ \int_{-\infty}^{+\infty} x |\Psi(x,t)|^2 dx ~.$$ But to relate this to the integral before (3.114) one needs to re-express (1.28) as $$\langle x\rangle ~=~ \int_{-\infty}^{+\infty} \Psi^* x \Psi \, dx ~,$$ and think of the ##x## as an operator acting to its right.

Imho, it's better to follow a development where (3.114) in abstract Hilbert space notation comes much earlier.

Second, in equation 3.116, which is later used in section 3.4, how does he get that definition for the standard deviation?
This is actually the variance, i.e., the square of the standard deviation. The Wikipedia article on variance explains it, and if you look up Wikipedia's entry on standard deviation, you'll eventually find a section showing its relation to the variance. It's basically ordinary probability theory, except that expectation values are expressed in terms of quantum states.
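In symbols, a small sketch (using ch.1's ##\rho(x) = |\Psi(x,t)|^2## as the probability density): $$\sigma_x^2 ~=~ \big\langle (x - \langle x\rangle)^2 \big\rangle ~=~ \langle x^2\rangle - \langle x\rangle^2 ~,$$ i.e., the ordinary variance, with the quantum state supplying the probability density.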

[Aside: I sense you're kinda jumping into all this stuff ahead of a "normal" person's schedule, but you might like to take a look at Ballentine's "QM -- A Modern Development". Even though that's a graduate-level text, if you can follow basic calculus and linear algebra, you might be able to read him. He develops lots of this stuff better than Griffiths (imho).]
 
  • #12
strangerep said:
The integral is a generalization of formulas he already wrote in ch1, e.g., eq(1.28). Unfortunately, he wrote: $$\langle x\rangle ~=~ \int_{-\infty}^{+\infty} x |\Psi(x,t)|^2 dx ~.$$ But to relate this to the integral before (3.114) one needs to re-express (1.28) as $$\langle x\rangle ~=~ \int_{-\infty}^{+\infty} \Psi^* x \Psi \, dx ~,$$ and think of the ##x## as an operator acting to its right.

Imho, it's better to follow a development where (3.114) in abstract Hilbert space notation comes much earlier.
See, I completely understand equation 1.28 but not 3.114. Are they the same thing if the operator ##\hat x## is Hermitian? If yes, why use one over the other? If no, when would you use each of them, and again, where does he get equation 3.114?
 
  • #13
Isaac0427 said:
See, I completely understand equation 1.28 but not 3.114. Are they the same thing if the operator ##\hat x## is Hermitian? If yes, why use one over the other? If no, when would you use each of them, and again, where does he get equation 3.114?
3.114 is just a more general notation, which was already used earlier in ch3, such as in sect 3.1.2 "Inner Products". (Heh, did you study that section or skim over it?)

The operator ##\hat x## is indeed Hermitian.

Why use one over the other? One must bear in mind the distinction between "abstract Hilbert space" and "concrete Hilbert space". For a general dynamical system, important observables are position and momentum. We can express them in general (abstract) terms on an abstract Hilbert space, (since the canonical commutation relations give us an abstract algebra to work with). We can also employ various powerful theorems about Hilbert spaces and Hermitian operators: e.g., the eigenvalues of Hermitian operators are necessarily real, and the eigenvectors of an Hermitian operator span the Hilbert space (meaning that any vector in the Hilbert space can be expressed as a linear combination of those eigenvectors). Such general theorems are often powerful enough to let us deduce the spectrum of the observables associated to a particular dynamical scenario. (By spectrum, I mean the set of eigenvalues associated to each Hermitian operator.) However, if we wish to consider specific cases, a concrete Hilbert space may be more useful, e.g., a function space in which the Hilbert inner product is represented as an integral. The concrete Hilbert space is essentially the abstract space equipped with more detail appropriate to the scenario being considered.

A good example is quantum angular momentum. One can derive the (surprising, counter-intuitive) result that the angular momenta of quantum particles are always integers or half-integers (in units of ##\hbar##), using only the requirement that the usual 3D group of spatial rotations be represented as unitary operators on an abstract Hilbert space. The more one meditates upon that, the more astonishing it seems (imho).
 
  • #14
strangerep said:
(Heh, did you study that section or skim over it?)
I did study it. But he says that if T is Hermitian (I know, I'm leaving off the hat), then T<a|b>=<Ta|b>=<a|Tb>. Given that, I am wondering why we write the integral one way as opposed to the other, and also why we don't say that <q>=Q∫|ψ|²dx, where Q is the Hermitian operator associated with the observable q.
 
  • #15
Isaac0427 said:
I did study it. But he says that if T is Hermitian (I know, I'm leaving off the hat), then T<a|b>=<Ta|b>=<a|Tb>.
I'd bet (or, at least, hope) that he doesn't actually say that. Where are you reading this from? (If you're thinking of eq 3.29, that's not what it says.)

[...] also why we don't say that <q>=Q∫|ψ|²dx, where Q is the Hermitian operator associated with the observable q.

##Q## is an operator: it takes one vector and gives you another. You can't just pull it outside the inner product.

Do you understand that the concrete version of ##\langle \Psi | \Phi \rangle## is $$\int_{-\infty}^\infty \Psi^* \Phi \, dx ~~ ?$$ And also that the concrete version of ##\langle \Psi | \hat x \Phi \rangle## is $$\int_{-\infty}^\infty \Psi^*(x,t) \, x \, \Phi(x,t) dx ~~ ?$$ And the concrete version of ##\langle \Psi | \hat p \Phi \rangle## is $$\int_{-\infty}^\infty \Psi^*(x,t) \, \left( \frac{\hbar}{i} \, \frac{\partial}{\partial x} \Phi(x,t) \right) dx ~~ ?$$

 
  • #16
Isaac0427 said:
T<a|b>=<Ta|b>=<a|Tb>

The first third of that is false. Try that for the observable x with <a| = <##\psi##| and |b> = |##\psi##> and you'll get <x> = x (since <##\psi##|##\psi##> = 1), which makes no sense. That would be saying "the expected value of x is x". The expected value of x should be a number, not an observable. The second two thirds of that equation are correct though.

Isaac0427 said:
also why we don't say that <q>=Q∫|ψ|²dx, where Q is the Hermitian operator associated with the observable q.

Given that if Q is Hermitian, ##Q^{*} = Q##, then $$<q> = \int \psi^{*} (Q \psi) dx = \int \psi^{*} Q \psi dx = \int \psi^{*} Q^{*} \psi dx = \int (Q \psi)^{*} \psi dx$$

It doesn't matter which way you want to write it, but ##\int \psi^{*} Q \psi dx## should be familiar.

The reason why Griffiths can say ##<x> = \int x |\psi|^{2} dx## is that x is a real-valued function, so it can be moved around like a constant factor inside the integrand. But it can't be pulled outside the integral, because it is a (very simple) function of the integration variable. Other operators, like momentum, can't be pulled in front of the psi's like x can. Try evaluating both ##\int \psi^{*} (p \psi) dx## and ##\int (p \psi)^{*} \psi dx##, keeping in mind that ##\psi## goes to 0 at infinity. You'll need to do integration by parts to show they're equivalent.
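Here is a sketch of that integration by parts (assuming ##p = \frac{\hbar}{i}\partial_x## as earlier in the thread, and ##\psi \to 0## at ##\pm\infty##): $$\int_{-\infty}^{\infty} (p\psi)^{*}\psi \, dx ~=~ -\frac{\hbar}{i}\int_{-\infty}^{\infty} (\partial_x\psi^{*})\,\psi \, dx ~=~ -\frac{\hbar}{i}\Big[\psi^{*}\psi\Big]_{-\infty}^{\infty} + \frac{\hbar}{i}\int_{-\infty}^{\infty} \psi^{*}\,\partial_x\psi \, dx ~=~ \int_{-\infty}^{\infty} \psi^{*}(p\psi)\, dx ~,$$ where the boundary term vanishes because ##\psi \to 0## at infinity.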
 
  • #17
strangerep said:
Oh my gawd (referring to both the string of non-answer posts in this thread, AND to the way Griffiths does things). <sigh>
Well, it's the OP's own fault for not citing the equations he is discussing and not providing complete information about the book he is referring to. Concerning Griffiths's QM textbook, I couldn't agree more. There are tons of good books out there, so why are so many using this one? My recommendation is to start with Sakurai and then have a look at Ballentine.
 
  • #18
##\Psi## is an infinite sum of wavefunctions ##\psi_j## that each have a definite value ##q_j## of q (i.e., are eigenfunctions of Q). The integral can be evaluated as
##\int \Psi^*\sum_{j=1}^{\infty}p_j\psi_jdx=\int \sum_{j=1}^{\infty}\Psi^*p_j\psi_jdx##, correct? How do you go from this to <q>?
 
  • #19
I don't understand the notation. If it's about the expectation value of momentum, given that the particle is prepared in a state described by a wave function ##\psi##, you get
$$\langle p \rangle=\langle \psi|\hat{p} \psi \rangle=\int_{\mathbb{R}} \mathrm{d} x \langle \psi|x \rangle \langle x|\hat{p} \psi \rangle = \int_{\mathbb{R}} \mathrm{d} x \psi^*(x) \frac{\hbar}{\mathrm{i}} \partial_x \psi(x).$$
I have used that the momentum operator in the position representation is given by
$$\langle x|\hat{p} \psi \rangle=\frac{\hbar}{\mathrm{i}} \partial_x \psi(x), \quad \psi(x)=\langle x|\psi \rangle.$$

You can derive this from the commutator relation
$$[\hat{x},\hat{p}]=\mathrm{i} \hbar \hat{1}.$$
To this end, the heuristic is that the momentum operator is the generator of spatial translations, which means that
$$|x + \delta x \rangle-|x \rangle=-\delta x \frac{\mathrm{i}}{\hbar} \hat{p} |x \rangle,$$
which implies by dividing by ##\delta x## and letting ##\delta x \rightarrow 0##
$$\partial_x |x \rangle=-\frac{\mathrm{i}}{\hbar} \hat{p} |x \rangle$$
or
$$\hat{p} |x \rangle=-\frac{\hbar}{\mathrm{i}} \partial_x |x \rangle.$$
This leads to
$$\hat{p} \psi(x):=\langle x|\hat{p} \psi \rangle=\langle \hat{p} x|\psi \rangle=\frac{\hbar}{\mathrm{i}} \partial_x \psi(x).$$
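To spell out the middle step (a small sketch): taking the dual of ##\hat{p}|x\rangle = -\frac{\hbar}{\mathrm{i}}\partial_x |x\rangle## complex-conjugates the prefactor, which flips its sign, so $$\langle \hat{p}x| = \frac{\hbar}{\mathrm{i}}\partial_x \langle x| \quad \Rightarrow \quad \langle \hat{p}x|\psi\rangle = \frac{\hbar}{\mathrm{i}}\partial_x \langle x|\psi\rangle = \frac{\hbar}{\mathrm{i}}\partial_x \psi(x).$$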
For the momentum eigenstate you get
$$\hat{p} u_p (x)=\frac{\hbar}{\mathrm{i}} \partial_x u_p(x)=p u_p(x).$$
This differential equation has the solution
$$u_p(x)=N_p \exp \left (\frac{\mathrm{i} x p}{\hbar} \right).$$
To normalize the state we define
$$\langle p|p ' \rangle=\delta(p-p') \; \Rightarrow \; \int_{\mathbb{R}} \mathrm{d} x u_p^*(x) u_{p'}(x)=N_p^* N_{p'} \int_{\mathbb{R}} \mathrm{d}x \exp \left (\frac{\mathrm{i}x(p'-p)}{\hbar} \right)=|N_p|^2 2 \pi \hbar \delta(p-p') \; \Rightarrow \; N_p=\frac{1}{\sqrt{2 \pi \hbar}}.$$
This finally gives
$$u_p(x)=\frac{1}{(2 \pi \hbar)^{1/2}} \exp \left (\frac{\mathrm{i} p x}{\hbar} \right).$$
 
  • #20
Isaac0427 said:
##\Psi## is an infinite sum of wavefunctions ##\psi_j## that each have a definite value ##q_j## of q (i.e., are eigenfunctions of Q). The integral can be evaluated as
##\int \Psi^*\sum_{j=1}^{\infty}p_j\psi_jdx=\int \sum_{j=1}^{\infty}\Psi^*p_j\psi_jdx##, correct? How do you go from this to <q>?
The expectation value <q> is just the weighted sum of each possible value of q times its probability. If 20% of the families on my street have two children and 80% have three children, the expectation value for the number of children in any given family is ##0.2\times{2}+0.8\times{3}=2.8##; that doesn't mean that there exists any household with 2.8 children, but rather that if I pick ten households at random I should expect to find a total of 28 children.

Applied to QM: I write the wave function as a sum of eigenfunctions of an operator A. If ##\psi_i## is the eigenfunction with eigenvalue ##\alpha_i##, then the appearance of a term ##c_i\psi_i## implies that a measurement of A will yield the value ##\alpha_i## with probability ##|c_i|^2##; sum all of these and we'll have the expectation value <A>.

You can work out for yourself that the integral is doing that summation (a sketch follows the list). You'll need to use the facts that:
- ##\int(a+b+c+...)=\int{a}+\int{b}+\int{c}+...##
- ##\int\psi_i^*\psi_j## equals one or zero according to whether i equals j or not.
- ##A\psi_i=\alpha_i\psi_i##, and because ##\alpha_i## is a constant you can move it to the left out of the integral.
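Here's a minimal sketch of that computation (assuming ##\Psi = \sum_i c_i\psi_i## with orthonormal eigenfunctions ##\psi_i##): $$\int \Psi^{*} A \Psi \, dx = \sum_{i,j} c_i^{*} c_j \alpha_j \int \psi_i^{*}\psi_j \, dx = \sum_i |c_i|^2 \alpha_i = \langle A\rangle ~.$$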
 
  • #21
Nugatory said:
The expectation value <q> is just the weighted sum of each possible value of q times its probability. If 20% of the families on my street have two children and 80% have three children, the expectation value for the number of children in any given family is ##0.2\times{2}+0.8\times{3}=2.8##; that doesn't mean that there exists any household with 2.8 children, but rather that if I pick ten households at random I should expect to find a total of 28 children.

Applied to QM: I write the wave function as a sum of eigenfunctions of an operator A. If ##\psi_i## is the eigenfunction with eigenvalue ##\alpha_i##, then the appearance of a term ##c_i\psi_i## implies that a measurement of A will yield the value ##\alpha_i## with probability ##|c_i|^2##; sum all of these and we'll have the expectation value <A>.

You can work out for yourself that the integral is doing that summation (a sketch follows the list). You'll need to use the facts that:
- ##\int(a+b+c+...)=\int{a}+\int{b}+\int{c}+...##
- ##\int\psi_i^*\psi_j## equals one or zero according to whether i equals j or not.
- ##A\psi_i=\alpha_i\psi_i##, and because ##\alpha_i## is a constant you can move it to the left out of the integral.
THANK YOU! This whole thing makes so much more sense. Now to the standard deviation question.
 
  • #22
Isaac0427 said:
Now to the standard deviation question.
That's basically about the probabilities of seeing deviations from the expectation value. In the example with the children, you wouldn't be amazed to find that a sample of ten households found 27 or 29 children instead of the expected 28... but just what is the exact probability that the count will be off by one? Or two? Or three? What about if you had sampled ten thousand households instead of ten? What are the chances that you will get 27,000 or 29,000 instead of the expected 28,000?

This is a general question in probability and statistics, and QM is just one of its many applications. Thus, most QM textbooks will assume that you've already learned it in one of your math classes. If you want to understand it in any depth, you might be better off asking in the math forums (although all the science advisors here in QM will have a solid working knowledge of how to use it).
 
  • #23
Nugatory said:
That's basically about the probabilities of seeing deviations from the expectation value. In the example with the children, you wouldn't be amazed to find that a sample of ten households found 27 or 29 children instead of the expected 28... but just what is the exact probability that the count will be off by one? Or two? Or three? What about if you had sampled ten thousand households instead of ten? What are the chances that you will get 27,000 or 29,000 instead of the expected 28,000?

This is a general question in probability and statistics, and QM is just one of its many applications. Thus, most QM textbooks will assume that you've already learned it in one of your math classes. If you want to understand it in any depth, you might be better off asking in the math forums (although all the science advisors here in QM will have a solid working knowledge of how to use it).
I completely understand the standard deviation, I just don't get how ##\sigma^2(q)=<(Q-<q>)^2>=<(Q-<q>)\psi|(Q-<q>)\psi>##
 
  • #24
$$\sigma^{2} = < (Q - <q>)^{2} > = <\psi|(Q - <q>)^{2}\psi> = <\psi|(Q - <q>)(Q - <q>)\psi> = <(Q - <q>)^{*}\psi | (Q - <q>)\psi> = <(Q - <q>)\psi | (Q - <q>)\psi>$$

To justify the last step, you have to see that if Q is Hermitian, then so is Q - <q>. It follows from bilinearity and the fact that <q> is just a good ol' scalar (and it's real, since Q is Hermitian). Sorry for the mess of <'s and >'s.
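For completeness, a quick sketch of that last claim (that Q - <q> is Hermitian when Q is): $$\langle (Q - \langle q \rangle)\psi|\phi\rangle = \langle Q\psi|\phi\rangle - \langle q \rangle^{*}\langle\psi|\phi\rangle = \langle\psi|Q\phi\rangle - \langle q \rangle\langle\psi|\phi\rangle = \langle\psi|(Q - \langle q \rangle)\phi\rangle,$$ using ##\langle q \rangle^{*} = \langle q \rangle## since Q is Hermitian.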
 
  • #25
Isaac0427 said:
I completely understand the standard deviation, I just don't get how ##\sigma^2(q)=<(Q-<q>)^2>=<(Q-<q>)\psi|(Q-<q>)\psi>##
Sorry, I wasn't quite sure where the starting point of the question was.
 
  • #26
Twigg said:
$$\sigma^{2} = < (Q - <q>)^{2} > = <\psi|(Q - <q>)^{2}\psi> = <\psi|(Q - <q>)(Q - <q>)\psi> = <(Q - <q>)^{*}\psi | (Q - <q>)\psi> = <(Q - <q>)\psi | (Q - <q>)\psi>$$

To justify the last step, you have to see that if Q is Hermitian, then so is Q - <q>. It follows from bilinearity and the fact that <q> is just a good ol' scalar (and it's real, since Q is Hermitian). Sorry for the mess of <'s and >'s.
Two questions. How did you get from ##\sigma^2(q)## to ##<(Q-<q>)^2>## (which was my question to begin with) and how did you get from the latter to ##<\psi|(Q-<q>)^2\psi>##?
 
  • #27
Well, if you use LaTeX to write your posts, please drop the > and < and use the LaTeX commands \langle and \rangle.
Thus compare

##<\psi,\phi> ## to

## \langle\psi, \phi\rangle##

I saw books written with < and > instead of the 2 commands. My eyes hurt for several days... :(
 
  • #28
Isaac0427 said:
How did you get from ##\sigma^2(q)## to ##<(Q-<q>)^2>##

I'm having a little trouble seeing where the difficulty is, please bear with me. In ordinary statistics, the standard deviation is defined by ##\sigma_{Q}^{2} = \int (Q - \langle q \rangle)^{2} \rho(x) dx## in which ##\rho(x)## is the probability density, Q is a scalar-valued function, and ##\langle q \rangle = \int Q \rho(x) dx## is the expectation value (average) of the function Q. It sounds like you're already familiar with these results, I'm just throwing that out there to avoid ambiguity. Note also that you can rewrite the definition of the (ordinary) standard deviation as the following average: ##\sigma_{Q}^{2} = \langle (Q - \langle q \rangle)^{2} \rangle = \int (Q - \langle q \rangle)^{2} \rho(x) dx##

If you're still not satisfied with the quantum version ##\sigma^{2}_{Q} = \int \psi^{*} (\hat{Q} - \langle q \rangle)^{2} \psi dx##, which is identical to Griffiths's equation ##\sigma^{2}_{Q} = \langle (\hat{Q} - \langle q \rangle)^{2} \rangle##, then I think what's bothering you is why it's safe to just switch out the scalar function Q for the operator ##\hat{Q}##. Is that the case? If so, I can explain this in a few special cases, though I don't have an argument that works in general. The wavefunction is typically defined in position space or momentum space. If Q is either position or momentum, then you can turn the definition of the standard deviation operator that Griffiths uses (##\sigma^{2}_{Q} = \langle (\hat{Q} - \langle q \rangle)^{2} \rangle##) into the ordinary definition (##\sigma_{Q}^{2} = \langle (Q - \langle q \rangle)^{2} \rangle##) by working in the corresponding domain (position space if Q is position, or momentum space if Q is momentum). For instance, if Q is p, then ##\langle (\hat{p} - \langle p \rangle)^{2} \rangle = \int \phi^{*}(p) (p - \langle p \rangle)^{2} \phi(p) dp = \int (p - \langle p \rangle)^{2} |\phi(p)|^{2} dp = \langle (p - \langle p \rangle)^{2} \rangle## since p is just a scalar-valued function in p-space. In these special cases, it is possible to turn the Hermitian operator Q into a scalar-valued function, and this makes the quantum version equivalent to the ordinary version of the standard deviation.

Isaac0427 said:
how did you get from the latter to ##<\psi|(Q-<q>)^2\psi>##?

I'm treating ##(\hat{Q} - \langle q \rangle)^{2}## as an operator in its own right. If you substitute ##\hat{A} = (\hat{Q} - \langle q \rangle)^{2}##, then all I did was say that ##\langle \hat{A} \rangle = \langle \psi | \hat{A} \psi \rangle##, which is just a matter of notation. At the end of the day, what both sides really mean is ##\int \psi^{*} \hat{A} \psi dx##.
 
  • #29
Twigg said:
I think what's bothering you is why it's safe to just switch out the scalar function Q for the operator ##\hat{Q}##. Is that the case?
No, as I said in earlier posts, Q is an operator that corresponds to the observable q. Q just doesn't have a hat.

Second, what even is the expectation value of an operator? Is <Q> just another way of saying <q>? That is, is the expectation value of an operator the expectation value of its corresponding observable?

Twigg said:
I'm having a little trouble seeing where the difficulty is, please bear with me. In ordinary statistics, the standard deviation is defined by ##\sigma_{Q}^{2} = \int (Q - \langle q \rangle)^{2} \rho(x) dx##, in which ##\rho(x)## is the probability density, Q is a scalar-valued function, and ##\langle q \rangle = \int Q \rho(x) dx## is the expectation value (average) of the function Q. It sounds like you're already familiar with these results, I'm just throwing that out there to avoid ambiguity.
You keep writing these equations, yet my entire question is where you got that definition of the standard deviation. I get that it is the definition, but I don't understand why. Where did it come from?
 
  • #30
Isaac0427 said:
No, as I said in earlier posts, Q is an operator that corresponds to the observable q. Q just doesn't have a hat.

Second, what even is the expectation value of an operator? Is <Q> just another way of saying <q>? That is, is the expectation value of an operator the expectation value of its corresponding observable?

Ok, this is definitely a terminology issue. Sorry if I made anything unclear. To avoid further confusion about terminology, I'll stick to empirical examples.

The expectation value of an operator is the average value you would measure. For example, the expectation value of the angular momentum operator (##\langle L_{z} \rangle##) operating on the ground state wavefunction of the hydrogen atom (##|\psi_{100}\rangle##) is just the angular momentum you would measure on average if you had a large sample of (non-interacting) ground state hydrogen atoms. To clarify, if you picked atoms one by one out of this sample and measured their ground state angular momentum, the average of your data would be ##\langle L_{z} \rangle##, aka the expectation value of the angular momentum operator in the ground state.

Isaac0427 said:
You keep writing these equations, yet my entire question is where you got that definition of the standard deviation. I get that it is the definition, but I don't understand why. Where did it come from?

My intention was to compare the version that Griffiths uses for the standard deviation of an observable to the standard definition of the standard deviation for a continuous variable, which you mentioned you understand. If this is different from what you're used to, the definitions I've used can be found and are motivated in Sections 5 and 6 of Chapter 15 of the 3rd Edition of Mary Boas's book Mathematical Methods in the Physical Sciences.

To paraphrase briefly, the standard deviation is the average "spread" of a function, in the sense of a least-squares fit. Notice that ## Q - \langle q \rangle ## is the distance of the function Q from its average (aka expected value), ##\langle q \rangle##. At each point in space (i.e. each value of x), the distance ##Q - \langle q \rangle## can vary. The quantity ##(Q - \langle q \rangle)^{2}##, the square of that distance (as a function of x), is a good measure of how far the function Q has varied from its average (at the point x). To extend this to the whole distribution (whereas before it only applied to a single point x), we take the average of the quantity ##(Q - \langle q \rangle)^{2}##, which is called the variance, ## Var_{Q} = \int (Q - \langle q \rangle )^{2} \rho(x) dx##. The variance is how much the function Q differs from its average value ##\langle q \rangle##, squared, on average. The standard deviation is defined as the square root of the variance.
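As a concrete discrete sketch, take Nugatory's street of families from earlier in the thread (20% with two children, 80% with three, so ##\langle q \rangle = 2.8##): $$\mathrm{Var} = 0.2\,(2 - 2.8)^2 + 0.8\,(3 - 2.8)^2 = 0.2(0.64) + 0.8(0.04) = 0.16, \qquad \sigma = \sqrt{0.16} = 0.4~.$$ So a family picked at random deviates from the mean of 2.8 children by 0.4 on average, in the root-mean-square sense.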

Hope this helps!
 
  • #31
Twigg said:
Ok, this is definitely a terminology issue. Sorry if I made anything unclear. To avoid further confusion about terminology, I'll stick to empirical examples.

The expectation value of an operator is the average value you would measure. For example, the expectation value of the angular momentum operator (##\langle L_{z} \rangle##) operating on the ground state wavefunction of the hydrogen atom (##|\psi_{100}\rangle##) is just the angular momentum you would measure on average if you had a large sample of (non-interacting) ground state hydrogen atoms. To clarify, if you picked atoms one by one out of this sample and measured their ground state angular momentum, the average of your data would be ##\langle L_{z} \rangle##, aka the expectation value of the angular momentum operator in the ground state.
My intention was to compare the version that Griffiths uses for the standard deviation of an observable to the standard definition of the standard deviation for a continuous variable, which you mentioned you understand. If this is different from what you're used to, the definitions I've used can be found and are motivated in Sections 5 and 6 of Chapter 15 of the 3rd Edition of Mary Boas's book Mathematical Methods in the Physical Sciences.

To paraphrase briefly, the standard deviation is the average "spread" of a function, in the sense of a least-squares fit. Notice that ## Q - \langle q \rangle ## is the distance of the function Q from its average (aka expected value), ##\langle q \rangle##. At each point in space (i.e. each value of x), the distance ##Q - \langle q \rangle## can vary. The quantity ##(Q - \langle q \rangle)^{2}##, the square of that distance (as a function of x), is a good measure of how far the function Q has varied from its average (at the point x). To extend this to the whole distribution (whereas before it only applied to a single point x), we take the average of the quantity ##(Q - \langle q \rangle)^{2}##, which is called the variance, ## Var_{Q} = \int (Q - \langle q \rangle )^{2} \rho(x) dx##. The variance is how much the function Q differs from its average value ##\langle q \rangle##, squared, on average. The standard deviation is defined as the square root of the variance.

Hope this helps!
Can you please put this in terms of operators and observables? Thanks.
 
  • #32
To clarify, I don't see how you can find the difference between ##-i\hbar \partial_x## and ##\hbar k##, and how you can take the expectation value of the former.
 
  • #33
We're at a point where leaving the hats off the operators can lead to serious ambiguity... I know, people do it all the time, but it's sloppy and they only get away with it when everyone in the discussion can figure out from the context when a symbol represents an operator. With latex you put the hats on using \hat - for example, \hat{X} renders as ##\hat{X}##.
 
  • #34
Isaac0427 said:
To clarify, I don't see how you can find the difference between ##-i\hbar \partial_x## and ##\hbar k##, and how you can take the expectation value of the former.

##\hat{p}=-i\hbar \partial_x## is the momentum operator. Its expectation value is ##\int\psi^*\hat{p}\psi\,dx=-i\hbar\int\psi^*\partial_x\psi\,dx##.

##\hbar k## is also an operator (albeit a rather trivial one), and its expectation value is ##\int\psi^*\hbar k\,\psi\,dx=\hbar k\int\psi^*\psi\,dx=\hbar k##.

If ##\hat{A}## and ##\hat{B}## are both operators, then the operator ##\hat{C}=\hat{A}-\hat{B}## is the operator that satisfies ##\hat{C}\psi=\hat{A}\psi-\hat{B}\psi##. This is the definition of subtraction for linear operators, and you can use it to find the difference between the operators ##-i\hbar \partial_x## and ##\hbar k##.
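For instance, a small sketch with the plane wave ##\psi_k = e^{ikx}## (a momentum eigenstate): $$(\hat{p} - \hbar k)\,e^{ikx} = -i\hbar(ik)\,e^{ikx} - \hbar k\, e^{ikx} = \hbar k\, e^{ikx} - \hbar k\, e^{ikx} = 0 ~,$$ so ##\hat{p} - \hbar k## annihilates a state of definite momentum ##\hbar k##; acting on a general state, it measures how far the state is from having that definite momentum, which is exactly the kind of deviation the standard deviation quantifies.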
 
  • #35
Nugatory said:
##\hat{p}=-i\hbar \partial_x## is the momentum operator. Its expectation value is ##\int\psi^*\hat{p}\psi\,dx=-i\hbar\int\psi^*\partial_x\psi\,dx##.

##\hbar k## is also an operator (albeit a rather trivial one), and its expectation value is ##\int\psi^*\hbar k\,\psi\,dx=\hbar k\int\psi^*\psi\,dx=\hbar k##.

If ##\hat{A}## and ##\hat{B}## are both operators, then the operator ##\hat{C}=\hat{A}-\hat{B}## is the operator that satisfies ##\hat{C}\psi=\hat{A}\psi-\hat{B}\psi##. This is the definition of subtraction for linear operators, and you can use it to find the difference between the operators ##-i\hbar \partial_x## and ##\hbar k##.
So, the expectation value of ##\hat Q## is the same as the expectation value of its eigenvalues, correct? I was using ##\hbar k## not as an operator but as the eigenvalue of the momentum operator (again, terminology may not be great but you know what I mean). So, given that, wouldn't the expectation value of ##\hbar k## (as an observable, not an operator) be the same as the expectation value of ##\hat p##? Evaluating the integral while keeping in mind that the momentum operator is Hermitian appears to give me momentum, or ##\hbar k##. Again, this raises the question of why we can say that the difference between the momentum operator and the expectation value of momentum has anything to do with the standard deviation.
 
