# I A few questions about Griffiths book

1. Aug 17, 2016

### Isaac0427

In Griffiths' intro to quantum mechanics, there are a few things that I feel he gets from nowhere: he just states them and doesn't derive or prove them.

First is equation 3.114, using operators to get the expectation value of an observable. I get how he got the inner product from the integral, but I don't get how he got the integral in the first place.

Second, in equation 3.116, which is later used in section 3.4, how does he get that definition for the standard deviation?

Also, can we please not focus on the technicalities -- for example, if "derive" is not the correct term, you still know what I mean, so please don't spend an entire post correcting me. I'd like to keep this thread focused on my two questions.

Thanks!

2. Aug 17, 2016

### vanhees71

Can you cite the formulae? In my 2nd edition of the book, there are no Eqs. 3.114 and 3.116 :-(.

3. Aug 17, 2016

### Demystifier

The OP uses the 1st edition, which substantially differs from the 2nd edition.

The integral related to [3.114] is obtained from [1.17]. The standard deviation is defined in [1.19].

Last edited: Aug 17, 2016
4. Aug 17, 2016

### vanhees71

Well, I'm not so convinced about this book, given the many questions in this forum that turn out to originate from confusion caused by its inaccuracies. Unfortunately, I don't have the 1st edition.

5. Aug 17, 2016

### Isaac0427

Neither of those have anything to do with my questions.

6. Aug 17, 2016

### Isaac0427

The first edition is free online.

7. Aug 17, 2016

### Jilang

It might help to think about the integral as a sum first. Think how you might write the expectation of an eigenvalue if there were only three possible eigenstates and eigenvalues.
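Jilang's suggestion can be made concrete with a tiny numerical sketch (the eigenvalues and coefficients below are made up for illustration): with only three eigenstates, the expectation value is just a finite weighted sum.

```python
import numpy as np

# Assumed example: three eigenvalues q_j and expansion coefficients c_j,
# with the |c_j|^2 playing the role of probabilities.
q = np.array([1.0, 2.0, 5.0])        # the three possible eigenvalues
c = np.array([0.6, 0.8j, 0.0])       # made-up expansion coefficients
probs = np.abs(c)**2                 # probabilities |c_j|^2, summing to 1
expectation = np.sum(probs * q)      # <q> = sum_j |c_j|^2 q_j -- a finite sum
```

The integral form in Griffiths is the continuum generalization of exactly this sum.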

8. Aug 17, 2016

### dextercioby

I doubt it is a legit copy. Yes, the second edition already appeared, but the original publisher won't give up on his copyright of the first, because it is a source of cash, even while not in print anymore. Perhaps Griffiths got the permission from the publisher to disseminate on his website the draft of his first edition, but that's also highly unlikely. AFAIK, it is the other way around. Some teachers (Carroll, Srednicki, Teschl, etc.) first publish notes on their website, thus for free, then later refine them into a book which they get published. If we don't have the cash for the book, at least we can still use the original notes.

9. Aug 17, 2016

### Staff: Mentor

Could you write those two equations in latex and post them? That might get this past the digression about different versions...

(Actually I'm pretty sure that I can guess what they are but if I happen to guess wrong, or just assume a different notation than you're looking at, I'll end up introducing unnecessary noise into the thread - so let's do it right).

10. Aug 17, 2016

### dextercioby

If you use the 1st Edition, then Demystifier's post is 100% correct, but probably needs more elaboration for you.

11. Aug 17, 2016

### strangerep

Oh my gawd (referring to both the string of non-answer posts in this thread, AND to the way Griffiths does things). <sigh>

The integral is a generalization of formulas he already wrote in ch1, e.g., eq(1.28). Unfortunately, he wrote: $$\langle x\rangle ~=~ \int_{-\infty}^{+\infty} x |\Psi(x,t)|^2 dx ~.$$ But to relate this to the integral before (3.114) one needs to re-express (1.28) as $$\langle x\rangle ~=~ \int_{-\infty}^{+\infty} \Psi^* x \Psi \, dx ~,$$and think of the $x$ as an operator acting to its right.
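A quick numerical sketch of that equivalence (the Gaussian wave packet below is an assumed example, not from the book): both ways of writing the integral give the same $\langle x\rangle$.

```python
import numpy as np

# Evaluate <x> both as the integral of x |Psi|^2 (eq. 1.28) and as the
# integral of Psi* x Psi, for an assumed normalized Gaussian packet at x0.
x0, sigma = 1.5, 0.7
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
psi = (2*np.pi*sigma**2)**(-0.25) * np.exp(-(x - x0)**2 / (4*sigma**2))

form1 = np.sum(x * np.abs(psi)**2) * dx            # x times probability density
form2 = (np.sum(psi.conj() * x * psi) * dx).real   # Psi* (x Psi), x acting right
```

Both integrals return the packet's center $x_0$, as they must.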

Imho, it's better to follow a development where (3.114) in abstract Hilbert space notation comes much earlier.

This is actually the variance, i.e., the square of the standard deviation. The Wikipedia article on variance explains it, and the article on standard deviation has a section showing the relation between the two. It's basically the ordinary theory of probability, except that expectation values are expressed in terms of quantum states.
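The variance/standard-deviation relation $\sigma^2 = \langle x^2\rangle - \langle x\rangle^2$ can be checked numerically; the Gaussian probability density below is an assumed example with mean 1.5 and $\sigma = 0.7$.

```python
import numpy as np

# Assumed |Psi|^2: a Gaussian density with mean x0 and width s.
x0, s = 1.5, 0.7
x = np.linspace(-10.0, 10.0, 400001)
dx = x[1] - x[0]
rho = np.exp(-(x - x0)**2 / (2*s**2)) / np.sqrt(2*np.pi*s**2)  # |Psi|^2

mean  = np.sum(x * rho) * dx        # <x>
mean2 = np.sum(x**2 * rho) * dx     # <x^2>
var   = mean2 - mean**2             # variance
sigma = np.sqrt(var)                # standard deviation
```

The recovered `sigma` matches the width the density was built with, which is the whole content of the definition.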

[Aside: I sense you're kinda jumping into all this stuff ahead of a "normal" person's schedule, but you might like to take a look at Ballentine's "QM -- A Modern Development". Even though that's a graduate-level text, if you can follow basic calculus and linear algebra, you might be able to read him. He develops lots of this stuff better than Griffiths (imho).]

12. Aug 17, 2016

### Isaac0427

See, I completely understand equation 1.28 but not 3.114. Are they the same thing, given that the operator $\hat x$ is Hermitian? If yes, why use one over the other? If no, when would you use each of them, and again, where does he get equation 3.114?

13. Aug 17, 2016

### strangerep

3.114 is just a more general notation, which was already used earlier in ch3, such as in sect 3.1.2 "Inner Products". (Heh, did you study that section or skim over it?)

The operator $\hat x$ is indeed Hermitian.

Why use one over the other? One must bear in mind the distinction between "abstract Hilbert space" and "concrete Hilbert space". For a general dynamical system, the important observables are position and momentum. We can express them in general (abstract) terms on an abstract Hilbert space, since the canonical commutation relations give us an abstract algebra to work with.

We can also employ various powerful theorems about Hilbert spaces and Hermitian operators: e.g., the eigenvalues of a Hermitian operator are necessarily real, and its eigenvectors span the Hilbert space (meaning that any vector in the Hilbert space can be expressed as a linear combination of those eigenvectors). Such general theorems are often powerful enough to let us deduce the spectrum of the observables associated with a particular dynamical scenario. (By spectrum, I mean the set of eigenvalues associated with each Hermitian operator.)

However, if we wish to consider specific cases, a concrete Hilbert space may be more useful, e.g., a function space in which the Hilbert inner product is represented as an integral. The concrete Hilbert space is essentially the abstract space equipped with more detail appropriate to the scenario being considered.

A good example is quantum angular momentum. One can derive the (surprising, counter-intuitive) result that the angular momenta of quantum particles are always integers or half-integers (in units of $\hbar$), using only the requirement that the usual 3D group of spatial rotations be represented as unitary operators on an abstract Hilbert space. The more one meditates upon that, the more astonishing it seems (imho).

14. Aug 17, 2016

### Isaac0427

I did study it. But he says that if T is Hermitian (I know, I'm leaving off the hat), then T<a|b>=<Ta|b>=<a|Tb>. Given that, I am wondering why we write the integral one way as opposed to the other, and also why we don't say that <q>=Q ∫|ψ|²dx, where Q is the Hermitian operator associated with the observable q.

15. Aug 17, 2016

### strangerep

I'd bet (or, at least, hope) that he doesn't actually say that. Where are you reading this from? (If you're thinking of eq 3.29, that's not what it says.)

$Q$ is an operator: it takes one vector and gives you another. You can't just pull it outside the inner product.

Do you understand that the concrete version of $\langle \Psi | \Phi \rangle$ is $$\int_{-\infty}^\infty \Psi^* \Phi \, dx ~~ ?$$ And also that the concrete version of $\langle \Psi | \hat x \Phi \rangle$ is $$\int_{-\infty}^\infty \Psi^*(x,t) \, x \, \Phi(x,t) dx ~~ ?$$ And the concrete version of $\langle \Psi | \hat p \Phi \rangle$ is $$\int_{-\infty}^\infty \Psi^*(x,t) \, \left( \frac{\hbar}{i} \, \frac{\partial}{\partial x} \Phi(x,t) \right) dx ~~ ?$$
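These concrete inner products are easy to evaluate numerically, and one can also check hermiticity of $\hat p$ this way: $\langle \Psi|\hat p\Phi\rangle$ should equal the conjugate of $\langle \Phi|\hat p\Psi\rangle$. The two test functions below are made up for illustration, with $\hbar = 1$ and the derivative taken by finite differences.

```python
import numpy as np

hbar = 1.0
x = np.linspace(-15.0, 15.0, 100001)
dx = x[1] - x[0]
# Assumed square-integrable test functions, then normalized on the grid
Psi = np.exp(-(x - 0.5)**2) * np.exp(2j * x)
Phi = np.exp(-x**2) * (1 + 0.3j * x)
Psi /= np.sqrt(np.sum(np.abs(Psi)**2) * dx)
Phi /= np.sqrt(np.sum(np.abs(Phi)**2) * dx)

inner  = np.sum(Psi.conj() * Phi) * dx                                  # <Psi|Phi>
x_elem = np.sum(Psi.conj() * x * Phi) * dx                              # <Psi|x Phi>
p_PsiPhi = np.sum(Psi.conj() * (hbar/1j) * np.gradient(Phi, dx)) * dx   # <Psi|p Phi>
p_PhiPsi = np.sum(Phi.conj() * (hbar/1j) * np.gradient(Psi, dx)) * dx   # <Phi|p Psi>
```

The hermiticity check works because the functions vanish at the ends of the grid, which is the discrete analogue of the boundary terms dying off at infinity.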

[Edit: watch out for my edits in the above.]

16. Aug 17, 2016

### Twigg

The first third of that is false. Try that for the observable x with <a| = <$\psi$| and |b> = |$\psi$> and you'll get <x> = x (since <$\psi$|$\psi$> = 1), which makes no sense. That would be saying "the expected value of x is x". The expected value of x should be a number, not an observable. The second two thirds of that equation are correct though.

Given that Q is Hermitian ($Q^{\dagger} = Q$), then $$<q> = \int \psi^{*} (Q \psi) dx = \int \psi^{*} (Q^{\dagger} \psi) dx = \int (Q \psi)^{*} \psi dx$$

It doesn't matter which way you want to write it, but $\int \psi^{*} Q \psi dx$ should be familiar.

The reason Griffiths can write $<x> = \int x |\psi|^{2} dx$ is that x is a real-valued function, so it can be moved around freely inside the integrand, like a constant factor. But it can't be pulled outside the integral, because it is a (very simple) function of the integration variable. Other operators, like momentum, can't be moved in front of the psi's the way x can. Try evaluating both $\int \psi^{*} (p \psi) dx$ and $\int (p \psi)^{*} \psi dx$, keeping in mind that $\psi$ goes to 0 at infinity; you'll need integration by parts to show they're equivalent.
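That exercise can also be done numerically as a sanity check (a sketch with assumed parameters, $\hbar = 1$): for a Gaussian packet carrying a plane-wave factor $e^{ikx}$, both orderings give the same real number, $\langle p\rangle = \hbar k$.

```python
import numpy as np

hbar, k, s = 1.0, 2.0, 1.0
x = np.linspace(-12.0, 12.0, 100001)
dx = x[1] - x[0]
# Assumed packet: Gaussian envelope times e^{ikx}, normalized on the grid
psi = np.exp(-x**2 / (4*s**2)) * np.exp(1j*k*x)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

p_psi = (hbar/1j) * np.gradient(psi, dx)      # p acting on psi, via finite differences
left  = np.sum(psi.conj() * p_psi) * dx       # integral of psi* (p psi)
right = np.sum(p_psi.conj() * psi) * dx       # integral of (p psi)* psi
```

The analytic integration by parts is what guarantees `left` and `right` agree; numerically they agree because the packet vanishes at the grid edges.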

17. Aug 18, 2016

### vanhees71

Well, it's the OP's own fault for not citing the equations he is discussing and not providing complete information about the book he is referring to. Concerning Griffiths's QM textbook, I couldn't agree more. There are tons of good books out there, so why are so many using this one? My recommendation is to start with Sakurai and then have a look at Ballentine.

Last edited by a moderator: Aug 18, 2016
18. Aug 18, 2016

### Isaac0427

$\Psi$ is an infinite sum of wavefunctions $\psi_j$ that each have a definite value $q_j$ of q (i.e., each is an eigenfunction of Q). The integral can be evaluated as
$\int \Psi^*\sum_{j=1}^{\infty}p_j\psi_j\,dx=\int \sum_{j=1}^{\infty}\Psi^*p_j\psi_j\,dx$, correct? How do you go from this to <q>?

19. Aug 18, 2016

### vanhees71

I don't understand the notation. If it's about the expectation value of momentum, given that the particle is prepared in a state described by a wave function $\psi$ you get
$$\langle p \rangle=\langle \psi|\hat{p} \psi \rangle=\int_{\mathbb{R}} \mathrm{d} x \langle \psi|x \rangle \langle x|\hat{p} \psi \rangle = \int_{\mathbb{R}} \mathrm{d} x \psi^*(x) \frac{\hbar}{\mathrm{i}} \partial_x \psi(x).$$
I have used that the momentum operator in the position representation is given by
$$\langle x|\hat{p} \psi \rangle=\frac{\hbar}{\mathrm{i}} \partial_x \psi(x), \quad \psi(x)=\langle x|\psi \rangle.$$

You can derive this from the commutator relation
$$[\hat{x},\hat{p}]=\mathrm{i} \hbar \hat{1}.$$
To this end, the heuristic is that the momentum operator is the generator of spatial translations, which means that
$$|x + \delta x \rangle-|x \rangle=-\delta x \frac{\mathrm{i}}{\hbar} \hat{p} |x \rangle,$$
which implies by dividing by $\delta x$ and letting $\delta x \rightarrow 0$
$$\partial_x |x \rangle=-\frac{\mathrm{i}}{\hbar} \hat{p} |x \rangle$$
or
$$\hat{p} |x \rangle=-\frac{\hbar}{\mathrm{i}} \partial_x |x \rangle.$$
For the wave function this gives
$$\hat{p} \psi(x):=\langle x|\hat{p} \psi \rangle=\langle \hat{p} x|\psi \rangle=\frac{\hbar}{\mathrm{i}} \partial_x \psi(x).$$
For the momentum eigenstate you get
$$\hat{p} u_p (x)=\frac{\hbar}{\mathrm{i}} \partial_x u_p(x)=p u_p(x).$$
This differential equation has the solution
$$u_p(x)=N_p \exp \left (\frac{\mathrm{i} x p}{\hbar} \right).$$
To normalize the state we define
$$\langle p|p ' \rangle=\delta(p-p') \; \Rightarrow \; \int_{\mathbb{R}} \mathrm{d} x u_p^*(x) u_{p'}(x)=N_p^* N_{p'} \int_{\mathbb{R}} \mathrm{d}x \exp \left (\frac{\mathrm{i}x(p'-p)}{\hbar} \right)=|N_p|^2 2 \pi \hbar \delta(p-p') \; \Rightarrow \; N_p=\frac{1}{\sqrt{2 \pi \hbar}}.$$
This finally gives
$$u_p(x)=\frac{1}{(2 \pi \hbar)^{1/2}} \exp \left (\frac{\mathrm{i} p x}{\hbar} \right).$$
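The eigenvalue relation $\hat p\, u_p = p\, u_p$ is easy to verify numerically (a sketch with $\hbar = 1$ and an arbitrary assumed value of $p$, using a finite-difference derivative):

```python
import numpy as np

# Check that u_p(x) = exp(i p x / hbar)/sqrt(2 pi hbar) satisfies
# (hbar/i) du_p/dx = p u_p, with hbar = 1 and an assumed p.
hbar, p = 1.0, 1.3
x = np.linspace(-5.0, 5.0, 200001)
dx = x[1] - x[0]
u = np.exp(1j * p * x / hbar) / np.sqrt(2*np.pi*hbar)

lhs = (hbar/1j) * np.gradient(u, dx)   # momentum operator applied to u_p
rhs = p * u
```

The two arrays agree to finite-difference accuracy everywhere on the grid.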

20. Aug 18, 2016

### Staff: Mentor

The expectation value <q> is just the weighted sum of each possible value of q times its probability. If 20% of the families on my street have three children and 80% have two children, the expectation value for the number of children in any given family is $0.2\times{3}+0.8\times{2}=2.2$; that doesn't mean that there exists any household with 2.2 children, but rather that if I pick ten households at random I should expect to find a total of 22 children.

Applied to QM: I write the wave function as a sum of eigenfunctions of an operator A. If $\psi_i$ is the eigenfunction with eigenvalue $\alpha_i$, then the appearance of a term $c_i\psi_i$ implies that a measurement of A will yield the value $\alpha_i$ with probability $|c_i|^2$; sum all of these and we'll have the expectation value <A>.

You can work out for yourself that the integral is doing that summation. You'll need to use the facts that:
- $\int(a+b+c+...)=\int{a}+\int{b}+\int{c}+....$
- $\int\psi_i^*\psi_j\,dx$ equals one or zero according to whether i equals j or not.
- $A\psi_i=\alpha_i\psi_i$, and because $\alpha_i$ is a constant you can move it to the left out of the integral.
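The three facts above can be checked numerically with an assumed concrete example: particle-in-a-box eigenfunctions $\sqrt{2/L}\sin(n\pi x/L)$, which are orthonormal on $[0, L]$, together with made-up eigenvalues $\alpha_n$ and coefficients $c_n$.

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 100001)
dx = x[1] - x[0]

def phi(n):
    # Orthonormal box eigenfunctions (real, vanish at both walls)
    return np.sqrt(2/L) * np.sin(n*np.pi*x/L)

c     = np.array([0.6, 0.8])     # assumed coefficients, sum of |c_i|^2 is 1
alpha = np.array([1.0, 4.0])     # assumed eigenvalues alpha_1, alpha_2
Psi  = c[0]*phi(1) + c[1]*phi(2)
APsi = c[0]*alpha[0]*phi(1) + c[1]*alpha[1]*phi(2)   # A Psi = sum c_i alpha_i psi_i

integral = np.sum(Psi * APsi) * dx              # integral of Psi* (A Psi); Psi is real here
weighted = np.sum(np.abs(c)**2 * alpha)         # sum of |c_i|^2 alpha_i
```

The cross terms in the integral vanish by orthogonality, leaving exactly the weighted sum of eigenvalues, which is the summation the integral is secretly doing.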