Solve 0=1 with Dirac's Equation: A Guide

  • Thread starter George Jones
  • Start date
  • Tags
    Dirac
  • #1
George Jones
Staff Emeritus
Science Advisor
Gold Member
Dirac Proves 0 = 1

Suppose [itex]A[/itex] is an observable, i.e., a self-adjoint operator, with real eigenvalue [itex]a[/itex] and normalized eigenket [itex] \left| a \right>[/itex]. In other words,

[tex]A \left| a \right> = a \left| a \right>, \hspace{.5 in} \left< a | a \right> = 1.[/tex]

Suppose further that [itex]A[/itex] and [itex]B[/itex] are canonically conjugate observables, so

[tex] \left[ A , B \right] = i \hbar I,[/tex]

where [itex]I[/itex] is the identity operator. Compute, with respect to [itex]\left| a \right>[/itex], the matrix elements of this equation divided by [itex]i \hbar[/itex]:

[tex]
\begin{equation*}
\begin{split}
\frac{1}{i \hbar} \left< a | \left[ A , B \right] | a \right> &= \left< a | I | a \right>\\
\frac{1}{i \hbar} \left( \left< a | AB | a \right> - \left<a | BA | a \right> \right) &= \left< a | a \right>.
\end{split}
\end{equation*}
[/tex]

In the first term, let [itex]A[/itex] act on the bra; in the second, let [itex]A[/itex] act on the ket:

[tex]\frac{1}{i \hbar} \left( a \left< a | B | a \right> - a \left<a | B | a \right> \right)= \left< a | a \right>.[/tex]

Thus,

[tex]0 = 1.[/tex]

This is my favourite "proof" of the well-known equation [itex]0 = 1[/itex].

What gives?

In order not to spoil other people's fun, it might be best to put "spoiler" at the top of any post that explains what's happening.

Regards,
George
 
Last edited:
  • #2
George Jones said:
In the first term, let [itex]A[/itex] act on the bra; in the second, let [itex]A[/itex] act on the ket:

[tex]\frac{1}{i \hbar} \left( a \left< a | B | a \right> - a \left<a | B | a \right> \right)= \left< a | a \right>.[/tex]

Thus,

[tex]0 = 1.[/tex]

I don't think you can do that because A and B don't commute?
 
  • #3
Super Nade said:
I don't think you can do that because A and B don't commute?

That step is OK.

One way to see this is to take |b> = A|a> and |c> = B|a>, and then to consider <b|c>.


Another way is to look at (AB)^* = B^* A^* = BA, which takes care of the order of the operators.
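Spelled out, with [itex]\left| b \right> = A \left| a \right>[/itex] and [itex]\left| c \right> = B \left| a \right>[/itex], the step uses only the self-adjointness of [itex]A[/itex] and the reality of [itex]a[/itex]:

[tex]\left< a | AB | a \right> = \left< b | c \right> = \left( A \left| a \right> \right)^{\dagger} \left( B \left| a \right> \right) = a^{*} \left< a | B | a \right> = a \left< a | B | a \right>,[/tex]

and likewise for the other term, with [itex]A[/itex] acting directly on the ket.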

Regards,
George
 
  • #4
Isn't this the one about the domains of the operators?
 
  • #5
selfAdjoint said:
Isn't this the one about the domains of the operators?

I don't think the problem is with domains. I think it is possible for the intersection of the domains of A, B, and [A , B] to be dense, and still to have the "proof" go through.

Regards,
George
 
  • #6
Interesting, but the proof is based on an assumption that A and B are canonically conjugate observables. Therefore 0=1 is constrained to that condition.

How does <a|[A,B]|a> = <a|AB|a> - <a|BA|a>, btw?
 
  • #7
Spoiler Below!

What a wonderful proof! I have never seen this one before, George. My discussion is below.
***SPOILER***

Think about the real line where we can represent the algebra by the usual quantum mechanical operators X and P. The key is to realize that X and P have no normalizable eigenvectors! The usual "normalization" for position "eigenstates" (lots of scare quotes) is [tex] \langle x | x' \rangle = \delta(x-x')[/tex], so let's have some fun with this formula. Since X and P are canonically conjugate we have that [tex] [X,P] = i \hbar [/tex], and we can take matrix elements of both sides. The right side is [tex] \langle x | i \hbar | x' \rangle = i \hbar \delta(x-x') [/tex]. The left side is [tex] (x - x')( - i \hbar \frac{d}{dx} \delta(x-x')) [/tex] where I have used [tex] \langle x | P = - i \hbar \frac{d}{dx} \langle x | [/tex]. Thus we appear to have stumbled onto the rather cute identity [tex] - x \frac{d}{dx} \delta(x) = \delta(x) [/tex]. Go ahead, try it under an integral, it actually works! I love such silly little formulae between wildly singular objects.
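For anyone who wants the "try it under an integral" check written out: pair [itex]-x \frac{d}{dx}\delta(x)[/itex] with a smooth test function [itex]f[/itex] and integrate by parts (the boundary terms vanish),

[tex]\int f(x) \left( -x \frac{d}{dx} \delta(x) \right) dx = \int \frac{d}{dx} \left[ x f(x) \right] \delta(x) \, dx = f(0) + 0 \cdot f'(0) = \int f(x) \, \delta(x) \, dx,[/tex]

so the two sides really do agree on every test function.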

A further amusing challenge:
It isn't always true that the derivative operator has no eigenstates. Suppose you look at the derivative operator on a finite interval. It turns out that the deficiency indices are (1,1), and thus self adjoint extensions exist which are parameterized by a phase (the boundary condition). One can now find proper eigenfunctions and eigenvalues for a given self adjoint extension of the derivative operator. Are we therefore back to proving that 0 = 1 or what?
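To make the setup concrete (a sketch, taking the interval to be [itex][0, L][/itex]): the extension labelled by the phase [itex]\alpha[/itex] acts on functions obeying [itex]\psi(L) = e^{i \alpha} \psi(0)[/itex], and its eigenfunctions and eigenvalues are

[tex]\psi_{n}(x) = \frac{1}{\sqrt{L}} e^{i k_{n} x}, \hspace{.5 in} P \psi_{n} = \hbar k_{n} \psi_{n}, \hspace{.5 in} k_{n} = \frac{\alpha + 2 \pi n}{L}, \quad n \in \mathbb{Z},[/tex]

which are perfectly normalizable, so the calculation in the first post can at least be written down.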
 
Last edited:
  • #8
waht said:
Interesting, but the proof is based on an assumption that A and B are canonically conjugate observables. Therefore 0=1 is constrained to that condition.

Except you can always find such A and B, so you can always find 0=1... :)

waht said:
How does <a|[A,B]|a> = <a|AB|a> - <a|BA|a>, btw?

It's just the definition of the commutator and linearity of the inner product:
[tex]\langle a | [A,B] | a\rangle = \langle a | (AB-BA) | a \rangle = \langle a | AB | a \rangle - \langle a | BA | a \rangle[/tex].

Physics Monkey, I've got a question about your spoiler below...







*** SPOILER cont. ***




I suspected (based on X and P :smile:) that delta distributions would enter into it, since we end up with [tex]\frac{1}{i\hbar}(a-a)\langle a|B|a\rangle=1[/tex] so it is clear that [tex]\langle a|B|a\rangle[/tex] must be ill-defined (i.e. infinite) to get something like "[tex]0\cdot\infty=1[/tex]." Recovering the definition of the derivative of the delta was neat. What I still don't see though is what the flaw in the proof is in the case of discrete operators...?

George, I thought of another 'interpretation' of the 'proof' too: you could prove 0=ih => h=0 => things aren't quantized :biggrin:
 
  • #9
Physics Monkey said:
A further amusing challenge:
It isn't always true that the derivative operator has no eigenstates. Suppose you look at the derivative operator on a finite interval. It turns out that the Neumann indices are (1,1), and thus self adjoint extensions exist which are parameterized by a phase (the boundary condition). One can now find proper eigenfunctions and eigenvalues for a given self adjoint extension of the derivative operator. Are we therefore back to proving that 0 = 1 or what?

So, you want to take A = P and B = X for the Hilbert space of square-integrable functions on the closed interval [0 , 1], say.









SPOILER for Physics Monkey's Challenge.

It looks like, appropriately, selfAdjoint was right - domains are important. For the operator PX, operating by X on an eigenfunction of P results in a function that is not in the domain of selfadjointness for P, so P cannot be moved to the left while still acting as P.

Easy direct calculations in this example reveal a lot.
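To spell out one such calculation (a sketch, for the interval [itex][0, 1][/itex] and the extension with boundary condition [itex]\psi(1) = e^{i \alpha} \psi(0)[/itex]): an eigenfunction of this extension is [itex]\psi_{k}(x) = e^{i k x}[/itex] with [itex]e^{i k} = e^{i \alpha}[/itex], but

[tex]\left( X \psi_{k} \right)(1) = e^{i k} \neq 0 = e^{i \alpha} \left( X \psi_{k} \right)(0),[/tex]

so [itex]X \psi_{k}[/itex] violates the boundary condition and lies outside the domain on which P is self-adjoint; letting P act to the left in [itex]\left< p | P X | p \right>[/itex] is exactly the step that breaks down.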

As I said in another thread, if A and B satisfy [A , B] = ihbar, then at least one of A and B must be unbounded. In the example of functions on the whole real line, both X and P are unbounded, while for functions on [0 , 1], X is bounded and P is unbounded.

Regards,
George
 
  • #10
Physics Monkey said:
What a wonderful proof! I have never seen this one before, George.

Time to come clean!

I lifted this example (and added a little elaboration) from Chris Isham's nice little book Lectures on Quantum Theory: Mathematical and Structural Foundations.

My discussion is below.

Very interesting discussion!

Regards,
George
 
  • #11
George Jones said:
It looks like, appropriately, selfAdjoint was right - domains are important. For the operator PX, operating by X on an eigenfunction of P results in a function that is not in the domain of selfadjointness for P, so P cannot be moved to the left while still acting as P.
Very good, George. The commutator is indeed ill defined on the momentum eigenstates.

George Jones said:
I lifted this example (and added a little elaboration) from Chris Isham's nice little book Lectures on Quantum Theory: Mathematical and Structural Foundations.
Well then, I think I might have to take a look at Isham's book.

George Jones said:
Very interesting discussion!
Thanks for the interesting post!

P.S. To all you readers out there, I can't resist telling about some nice physical applications of such ideas. It turns out that the self adjoint extensions of the momentum operator on a finite interval describe physically the problem of a particle on a ring with a magnetic field through the ring. This is in turn equivalent to imposing a 'twisted' boundary condition [tex]\psi(x+L) = e^{i \alpha} \psi(x) [/tex] on the wavefunction for a particle on a ring with no magnetic field. But there's more! Impurities in a metal can localize electronic states and cause a metal to become an insulator. One way to tell if you have localized states is to look at how sensitive such states are to the boundary conditions of your sample. The above ideas can then be applied, and you can relate the question of localization to the behavior of the system under an applied magnetic field (a problem which can be attacked with perturbation theory). And you thought self adjoint extensions were dull! Shame on you. :tongue:
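To see the gauge transformation explicitly (a sketch in the same notation, for a ring of circumference [itex]L[/itex]): writing [itex]\psi(x) = e^{i \alpha x / L} \phi(x)[/itex] turns the twisted boundary condition into ordinary periodicity, [itex]\phi(x + L) = \phi(x)[/itex], while

[tex]-i \hbar \frac{d \psi}{dx} = e^{i \alpha x / L} \left( -i \hbar \frac{d}{dx} + \frac{\hbar \alpha}{L} \right) \phi,[/tex]

so the twist is traded for a constant shift of the momentum, which is exactly what a uniform vector potential (a flux proportional to [itex]\alpha[/itex] threading the ring) would produce.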
 
  • #12
First, if "0 = 1" is true then QM completely falls apart, sort of like proof by contradiction, and "0=1" is certainly a contradiction. That tells me that the various proofs must be incorrect, or most physicists have been living like Alice in Wonderland.

The problem is that P X | x> is not equal to P|x> x. As in, go to an x position representation in which P = -i d/dx. That is,

P X |x> = -i d/dx x |x> = (-i + {-i x d/dx})|x>

Delta functions and domains are not at issue.

Sometimes abstraction can lead even the best astray.

Think about Wick's theorem, which would not hold if "0 = 1" were true, nor would many standard manipulations of creation and destruction operators be legitimate.

(For the abstract truth about momentum operators, see Hille and Phillips, Functional Analysis and Semi-Groups, Chap. XIX, which discusses translation operators (d/dx) in great and highly rigorous detail. The authors demonstrate that there really is not a problem with such operators.)

Again, if "0=1" then QM is inherently mathematically trustworthy, which seems to me to be a completely absurd idea.

Regards,
Reilly Atkinson
 
  • #13
reilly said:
Sometimes abstraction can lead even the best astray.
Any notation can lead people astray. But abstraction has the advantage that there are fewer messy details, which means less opportunities to make mistakes, and less possibility for those mistakes to be obscured.

Avoiding abstraction certainly doesn't prevent one from making mistakes...


reilly said:
P X |x> = -i d/dx x |x> = (-i + {-i x d/dx})|x>
such as overworking your variables. :smile: The x in d/dx is not the same as the x in |x>; the former is the coordinate variable of the position representation, and the latter is a constant denoting which position eigenstate we've selected.

If I relabel the variables so x is no longer being overworked, we're looking at -i d/dx x |a>. (And don't forget that x |a> = a |a>.)


You could rewrite George's entire post in the A-representation (so that A = x, and B = -ih d/dx), but that doesn't resolve the paradox: you still wind up with 0 = 1.


reilly said:
Think about Wick's theorem, which would not hold if "0 = 1" were true, nor would many standard manipulations of creation and destruction operators be legitimate.
That's not accurate: if 0=1 were true, then everything is true. (And simultaneously false)


Again, if "0=1" then QM is inherently mathematically trustworthy, which seems to me to be a completely absurd idea.
I'm completely confused by this.
 
Last edited:
  • #14
Can we go over this again, slowly? This is something that has bothered me for a little while.
Physics Monkey said:
To all you readers out there, I can't resist telling about some nice physical applications of such ideas. It turns out that the self adjoint extensions of the momentum operator on a finite interval describe physically the problem of a particle on a ring with a magnetic field through the ring. This is in turn equivalent to imposing a 'twisted' boundary condition [tex]\psi(x+L) = e^{i \alpha} \psi(x) [/tex] on the wavefunction for a particle on a ring with no magnetic field.
Is L the circumference of the ring? Does this not destroy the single-valuedness of [itex]\psi(x)[/itex]? Or is that what is being probed?

I think I've drunk too deep from the cup of Periodic BCs, what with all the goodies like flux quantization in SCs and Brillouin zones in crystals that it has thrown up like so many marshmallows!
Physics Monkey said:
But there's more! Impurities in a metal can localize electronic states and cause a metal to become an insulator. One way to tell if you have localized states is to look at how sensitive such states are to the boundary conditions of your sample. The above ideas can then be applied, and you can relate the question of localization to the behavior of the system under an applied magnetic field (a problem which can be attacked with perturbation theory). And you thought self adjoint extensions were dull! Shame on you. :tongue:
Help me understand this, please.

Let's start with a simple case: the Anderson Hamiltonian for non-interacting electrons in a cubic lattice.

The Hamiltonian consists of your favorite on-site disorder potential and the usual hopping term (nn, say). You then apply the above boundary condition to the single-particle eigenfunction in one or more directions. Ignoring what this means for now, this allows you to Taylor expand the eigenvalues [itex]E_i(\alpha) [/itex] and look at the coefficients of higher order terms in [itex]\alpha[/itex]. The deviation of these coefficients from 0 is what you call the phase sensitivity? If that's true, how exactly is this a "measure" of localization? Is the point to extract a dimensionless number (like T/U) and look for a scaling law? And if not, what happens next?
 
Last edited:
  • #15
George Jones,
Physics Monkey,

Would that "0=1" contradiction be a proof that no finite-dimentional matrix could satisfy the commutation relation [tex] \left[ A , B \right] = i \hbar I[/tex] ?

Would it possible to see that easily for two-dimentional matrices?

Michel
 
  • #16
The problem is indeed one of domains of definition; it's the last step in the sequence that is erroneous. Ask yourselves: what *is* the operator AB or BA, and on what domain is it defined?

Most of this is easily demystified if you recall the spectral theorem. For general operators, you are usually confronted not just with discrete or continuous spectra; instead you have those plus a bunch of other stuff, often called the residual spectrum. All bets are off when confronted with this; you can't just use the naive physicist's language of functional analysis in those cases.
 
  • #17
lalbatros said:
Would that "0=1" contradiction be a proof that no finite-dimentional matrix could satisfy the commutation relation [tex] \left[ A , B \right] = i \hbar I[/tex] ?

Would it possible to see that easily for two-dimentional matrices?

It does appear to be a proof by contradiction, at least for observables. If A is not Hermitean then [tex]\langle a|A=(A^\dagger|a\rangle)^\dagger\neq (A|a\rangle)^\dagger[/tex], so acting A to the left in the term [tex]\langle a|AB|a\rangle[/tex] doesn't yield [tex]a\langle a|B|a\rangle[/tex], as required to obtain the contradiction 0=1. Your conclusion is correct anyway, it's just not proven by this example (unless I missed something else, it is late...).

For 2D matrices, if A and B are completely arbitrary then

[tex]A=\left(\begin{array}{cc} a & b \\ c & d\end{array}\right),\
B=\left(\begin{array}{cc} w & x \\ y & z\end{array}\right),\
AB-BA=\left(\begin{array}{cc} by-cx & b(z-w)+x(a-d) \\ c(w-z)+y(d-a) & -(by-cx)\end{array}\right).[/tex]

Since the 1-1 entry is the negative of the 2-2 entry (the commutator is traceless), this can never equal [itex]i \hbar I[/itex].
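The 2x2 computation is a special case of a simple trace argument (a quick sketch, valid in any finite dimension n):

[tex]\mathrm{Tr} \left[ A , B \right] = \mathrm{Tr} \left( AB \right) - \mathrm{Tr} \left( BA \right) = 0, \hspace{.5 in} \mathrm{Tr} \left( i \hbar I_{n} \right) = i \hbar n \neq 0,[/tex]

so no finite-dimensional matrices at all, Hermitian or not, can satisfy [itex]\left[ A , B \right] = i \hbar I[/itex]. (In infinite dimensions the trace of an unbounded operator is not available, so this argument says nothing there.)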
 
  • #18
reilly said:
First, if "0 = 1" is true then QM completely falls apart, sort of like proof by contradiction, and "0=1" is certainly a contradiction. That tells me that the various proofs must be incorrect, or most physicists have been living like Alice in Wonderland.

The proofs that 0 = 1 are certainly incorrect!

reilly said:
The problem is that P X | x> is not equal to P|x> x. As in, go to an x position representation in which P = -i d/dx.

I'm afraid this isn't true. It doesn't matter what P is, if X hits the state [tex] |x\rangle [/tex] first, then you can replace X with x.

reilly said:
Delta functions and domains are not at issue.

No, these things really are the relevant issues.
 
  • #19
Hi Gokul,

Do you want to hear just the story about the application to localization, or the whole story including the explanation of the paradox? I'm just going to talk about localization for the moment, but I'm happy to say something else if you want.

To start with, the physical system is a piece of material in d dimensions of typical size L. I'll talk in one dimensional terms because this is easiest to understand, but the theory generalizes easily. The physical geometry is not periodic, although what we will eventually imagine is putting lots of these intervals of length L next to each other. As I indicated above, it is a technical fact that the momentum operator [tex] P = - i \frac{d}{dx} [/tex] is not self adjoint on such a finite interval. This is easy to understand from the fact that the equations [tex] P \psi = \pm i k \psi [/tex] have perfectly good solutions in the Hilbert space. As an aside, notice how this situation is modified if the interval is infinite [tex] (-\infty, \infty)[/tex]. In this case, neither equation has a solution in the Hilbert space (of square integrable functions), and the momentum operator on the real line is called essentially self adjoint.
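A quick check of that claim (same [itex]P = - i \frac{d}{dx}[/itex], with [itex]k > 0[/itex]):

[tex]P \, e^{\mp k x} = - i \frac{d}{dx} e^{\mp k x} = \pm i k \, e^{\mp k x},[/tex]

and [itex]e^{\mp k x}[/itex] is square integrable on a finite interval (so the deficiency indices are (1,1)) but not on the whole real line (which is why P is essentially self adjoint there).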

There is then some mathematical procedure for fixing the momentum operator up by defining what is called a self adjoint extension. This extension is characterized by a phase which can be identified with the boundary condition of your sample [tex] \psi(x+L) = e^{i \alpha} \psi(x) [/tex]. In other words, your new fixed up momentum operator only makes sense on functions that satisfy this property. The physical geometry isn't periodic so there really aren't any issues about multivaluedness here. My original description is somewhat confusing on this point, so sorry about that. You can think of this phase as more like the Bloch factor [tex] e^{i k a} [/tex] that obtains from translating a Bloch state by one lattice spacing. The important physical realization is that this weird phase factor can be mapped via a gauge transformation to the problem of a particle on a ring with periodic boundary conditions and a magnetic field. The wavefunction can be thought of as being multivalued, with branches labeled by the winding number, but you are protected from unpleasantness by gauge invariance. Mmmmm marshmallows.

Now for some physics. The Anderson model is a good place to start, and your understanding is quite right. This twisted boundary condition is imposed with the physical idea that somehow sensitivity to boundary conditions will tell you whether states are extended or localized. You then map the problem to the equivalent system on a ring with a magnetic field. The vector potential is proportional to [tex] \alpha [/tex], and it makes sense to do perturbation theory in order to understand how the twisted boundary condition affects states. You would be interested in comparing something like the variance of [tex] \frac{\partial^2 E_i(\alpha)}{\partial \alpha^2} [/tex] to the typical level spacing [tex] \Delta [/tex]. For a given realization of disorder, you can work out something like [tex] \frac{\partial^2 E_i(\alpha)}{\partial \alpha^2} \sim \sum_{j \neq i} \frac{1}{L^2} \frac{|\langle i | P/m | j \rangle|^2}{E_i - E_j} [/tex] plus some constant term you don't really care about.

To estimate the variance we can replace the energy denominator with the level spacing and the velocity matrix elements with some kind of typical velocity scale. Such matrix elements also enter into the Kubo formula for conductivity, so you can use the conductivity to estimate the typical matrix element. The Einstein formula for conductivity is [tex] \sigma = 2 e^2 N(0) D / \Omega [/tex] where D is the diffusion constant, and with the help of the Kubo formula you can easily estimate [tex] v^2 \sim D / N(0) = D \Delta [/tex] where [tex] N(0) [/tex] is the density of states per unit energy at the Fermi surface. The variance is then simply [tex] D/L^2 [/tex] which is called the Thouless energy [tex] E_T[/tex]. The sensitivity to boundary conditions is given in terms of the ratio [tex] \frac{E_T}{\Delta} [/tex].

Actually, this ratio has a very direct physical meaning. Go back to the Einstein formula for the conductivity. Freshman physics says the conductance is related to the conductivity by [tex] G(L) = \sigma L^{d-2} [/tex]. We can define a dimensionless conductance [tex] g(L) [/tex] by multiplying the conductance by the resistance quantum [tex] \sim 1/e^2 [/tex]. The result is that the dimensionless conductance is given by [tex] g(L) \sim \frac{N(0)}{L^d} D L^{d-2} = \frac{D}{L^2} \frac{1}{\Delta} = \frac{E_T}{\Delta} [/tex]. So the sensitivity to boundary conditions is just determined by the dimensionless conductance [tex] g [/tex]! The physical picture is now quite nice. If we have localized states then the system is an insulator and g should be very small which we interpret as saying the states are insensitive to boundary conditions. If we have extended states then the system is a metal and g should be large which we interpret as pronounced sensitivity to boundary conditions.

With the understanding that localized and extended states can be characterized in terms of the dimensionless conductance, we can now build our scaling theory and get all the usual fun results. The added bonus here is that you can solve the Anderson model easily enough and directly compute the Thouless energy and level spacing. It's especially easy in d = 1, and you can verify the prediction of the scaling theory that g always goes to zero as L grows large. And all this can be phrased in the language of self adjoint extensions!

Refs:
Thouless has a number of papers on this sensitivity to boundary conditions idea. The book by Imry also has something about this in it as I recall.
 
Last edited:
  • #20
Thanks for the response, PM.

I've only gotten half-way through it, and won't likely find more time until later tonight, but I wanted to let you know that I've seen this.

I'll reply later. And if this is distracting (for others) from the rest of the thread, I could request that it be split off into a new thread.

PS : Most everything Thouless has written in this field involves the RG! :frown:
 
  • #21
Hurkyl -- First thanks for your final correction; I meant to say that if "0=1" then QM is mathematically UNTRUSTWORTHY, and hence totally unreliable. That's a big stretch, in my opinion.

Not always, but generally abstraction flows from details, and, always, the devil is in the details. And, in my experience as a teacher and as a researcher in QFT and particle physics, abstraction can indeed lead people astray, unless they are highly sophisticated and experienced, or they are at the genius level. I encountered abstraction generated mistakes quite frequently in oral exams -- particularly in mechanics. For example, people might try to solve, with no success, a simple problem of identifying forces on a ladder leaning against a wall with a Lagrangian approach. Kids who did it the time-honored freshman physics way usually got it right. (This happened to some of my graduate classmates at Stanford, and to some of my students at Tufts. I say this to demonstrate that we are not talking dummies, but very bright students.) A great example, of abstraction coming after details in my opinion, is the history of modern QM notation before and after Dirac.

Another place to learn about the growth of abstraction is the history of E&M; see Whittaker's two-volume History of the Aether and Electricity. Dirac could not have done what he did without the details of QM as worked out by Schrodinger, et al. Laguerre and Hermite polynomials are all about details, messy details at that. There's a lot of detail in Mackey, Gleason and Rieffel's work on the math of QM, and, of course, such details are done in the name of getting to abstraction.

As far as my "proof", I realized after the fact, that it is discussed in great detail in Dirac's QM book, Chapter IV, The Quantum Conditions, all about Poisson Brackets, representations, and commutators. Dirac is certainly harder to refute than me, and he shows, quite clearly, that the "0=1" proof is simply wrong, along the lines I suggested. Sorry, but the history of both math and physics shows the great importance of details, messy or not, and the dangers of abstraction, unless grounded in details.

Your criticism of my proof is, as Dirac shows, baseless. First, in the position rep,

XP - PX -> x(-i d/dx) - (-i d/dx)x = +i

If that is true, then necessarily <W| XP-PX|W> = +i for any normalized state W.
(Again, it's Chapter IV in Dirac's treatise.) The same is true, as it has to be, in the momentum representation in which X=+id/dp, and so on.

Again, if "0=1" is true, then QM is not worth a dime.

Hurkyl said:
Any notation can lead people astray. But abstraction has the advantage that there are fewer messy details, which means less opportunities to make mistakes, and less possibility for those mistakes to be obscured.

Avoiding abstraction certainly doesn't prevent one from making mistakes...
such as overworking your variables. :smile: The x in d/dx is not the same as the x in |x>; the former is the coordinate variable of the position representation, and the latter is a constant denoting which position eigenstate we've selected.

If I relabel the variables so x is no longer being overworked, we're looking at -i d/dx x |a>. (And don't forget that x |a> = a |a>.) You could rewrite George's entire post in the A-representation (so that A = x, and B = -ih d/dx), but that doesn't resolve the paradox: you still wind up with 0 = 1.
That's not accurate: if 0=1 were true, then everything is true. (And simultaneously false)
I'm completely confused by this.

As well you should be, and, again, my apologies for sloppy writing.

Haelfix -- Please note that Hille and Phillips, published as one of the volumes in the American Mathematical Society Colloquium Publications, was one of the preeminent books in the Functional Analysis field for many years after its initial publication in 1948. There are, in fact, three types of spectra: continuous, point, and residual. The translation operator -i d/dx has both continuous and point spectra, and no residual spectrum.

Ralph Phillips was my Functional Analysis Prof, and I can assure you that he was the purest of mathematicians, as his book will attest -- much of the book, and his course, deals primarily with analysis, including operational calculi, mostly in the context of Banach spaces and Banach algebras, hardly the day-to-day language of physics math. Further, if you look carefully, say at Chapter XIX of H&P, you'll find that physicists have pretty much got it right when it comes to translation operators --- P = -i d/dx, or X = i d/dp.

I challenge anyone to read Dirac's Chapter IV and still maintain that the "0=1" proof has any merit.

Regards,
Reilly
 
  • #22
I'm wondering if this is an example of Godel's theorem, where in some formal system you get a statement that is true but unprovable, or in this case a provable statement that is false.

Can it be applied here?
 
  • #23
reilly, no one here (not George, not PM, not haelfix...) is saying that the statement or the proof is correct. It is presented as a paradox just like any of the other "0=1 paradoxes".

The point of the thread is to challenge the audience to find the flaw in the argument.
 
  • #24
lalbatros said:
Would that "0=1" contradiction be a proof that no finite-dimentional matrix could satisfy the commutation relation [tex] \left[ A , B \right] = i \hbar I[/tex] ?
Michel

Yes.

Below, I fill in some details of an argument in Reed and Simon which shows that if [itex] \left[ A , B \right] = i \hbar I[/itex], then at least one of [itex]A[/itex] and [itex]B[/itex] must be unbounded, and thus they must operate on an infinite-dimensional space, since all linear operators on finite-dimensional spaces are bounded.

Suppose that [itex]A[/itex] and [itex]B[/itex] are both self-adjoint, and that (setting [itex]\hbar = 1[/itex]) [itex] \left[ A , B \right] = AB - BA = i I[/itex]. Then,

[tex]
\begin{equation*}
\begin{split}
A^{n} B &= A^{n-1} A B\\
&= A^{n - 1} \left( BA + iI \right)\\
&= A^{n - 1} BA + iA^{n - 1}.
\end{split}
\end{equation*}
[/tex]

Repeatedly doing this, or using induction, shows that

[tex]i n A^{n - 1} = A^{n} B - B A^{n},[/tex]

and then the triangle inequality gives

[tex]n \left\| A^{n - 1} \right\| \leq 2 \left\| A^{n} \right\| \left\| B \right\|.[/tex]

Consequently, since [itex]\left\| A^{n} \right\| \leq \left\| A \right\| \left\| A^{n-1} \right\|[/itex] and [itex]\left\| A^{n-1} \right\| \neq 0[/itex] (otherwise the displayed identity would force [itex]A^{n-2}[/itex], then [itex]A^{n-3}[/itex], and finally [itex]I[/itex] to vanish), we get [itex]2 \left\| A \right\| \left\| B \right\| \geq n[/itex] for every [itex]n[/itex], and thus at least one of [itex]A[/itex] and [itex]B[/itex] must be unbounded.

The Hellinger-Toeplitz theorem states that if [itex]A[/itex] is defined on the whole Hilbert space and

[tex]\left< \psi | A \phi \right> = \left< A \psi | \phi \right>[/tex]

for every [itex]\phi[/itex] and [itex]\psi[/itex], then [itex]A[/itex] is bounded.

Therefore, this theorem and the above analysis give that [itex] \left[ A , B \right][/itex] cannot have the whole Hilbert space as its domain.

Again, as selfAdjoint originally pointed out, domains are important.
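For anyone who wants to see the finite-dimensional obstruction numerically, here is a minimal sketch of my own (hbar set to 1, not taken from Reed and Simon): build X and P from harmonic-oscillator ladder operators truncated to N dimensions and look at their commutator. It equals iI everywhere except the last diagonal entry, as it must, since the trace of any matrix commutator vanishes while the trace of iI does not.

[code]
import numpy as np

N = 5
# Annihilation operator truncated to N levels: a|n> = sqrt(n)|n-1>.
a = np.diag(np.sqrt(np.arange(1, N)), k=1)
X = (a + a.conj().T) / np.sqrt(2)        # X = (a + a^dagger)/sqrt(2)
P = 1j * (a.conj().T - a) / np.sqrt(2)   # P = i(a^dagger - a)/sqrt(2)

C = X @ P - P @ X                        # the commutator [X, P]
print(np.round(np.diag(C), 10))          # i, i, ..., i, i*(1 - N): not i times the identity
print(np.round(np.trace(C), 10))         # 0, as any matrix commutator must have
[/code]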
 
Last edited:
  • #25
waht said:
I'm wondering if this is an example of Godel's theorem, where in some formal system you get a statement that is true but unprovable, or in this case a provable statement that is false.

Can it be applied here?

No, Godel's theorems have no role here.

I have read an amusing anecdote about quantum mechanics and Godel's theorems. Once, Wheeler and Thorne went to Godel's office and asked him about possible relationships between his theorems and the uncertainty principle. Godel threw them out of his office!
 
  • #26
George,

I write my conclusion here to help me remember: bracketing an operator should be done by a real mathematician or by a careful physicist (real is only an option then).

If I choose to be careful, I would say that I can bracket the commutator in this paradox, but only carefully on a finite-dimensional subspace (dimension n). In this case, I expect the commutation relation to be altered, since it is clearly impossible on a finite-dimensional space (thus: [An,Bn] = ih Cn ≠ ih I). Therefore I also expect that taking the limit (n => inf) properly will remove the contradiction.

Now we still need a mathematician (maybe you again, George) to show us how the commutation relation is modified on a subspace. And in addition we should also be taught how to bracket correctly.

Thanks for the explanation,
and thanks to the mathematician who will help me (us),

Michel
 
Last edited:
  • #27
reilly said:
Your criticism of my proof is, as Dirac shows, baseless.
My criticism was just that you wrote down something nonsensical.


But I do have a criticism of your "proof" -- you are "resolving" this pseudoparadox by ignoring half of it. :tongue:


It's certainly true that XP - PX = i, and for any normalized state |y>, we have <y| (XP - PX) |y> = i... but that's only half of the story.

If we compute the same thing two different ways, we should get the same answer. If we have a normalized eigenstate |a> of the position operator, then we are completely justified in the following computation:

<a|(XP - PX)|a> = <a|XP|a> - <a|PX|a> = <a|aP|a> - <a|Pa|a> = 0

But yet we know that <a|(XP - PX)|a> = i

And the pseudoparadox is that we've (rigorously) computed the same quantity, but gotten two different answers.


The flaw lies in the hypothesis that "we have a normalized eigenstate |a> of the position operator", since no such thing exists. If we assume we have such a thing, then both of the above computations are correct.
 
  • #28
Gokul43201 said:
I've only gotten half-way through it, and won't likely find more time until later tonight, but I wanted to let you know that I've seen this.

Outstanding! Cool stuff isn't it?

Gokul43201 said:
PS : Most everything Thouless has written in this field involves the RG! :frown:

Yeah, Thouless likes the ol' RG. But then, what's not to like? :!)

P.S. Speaking of RG, when you finish Shankar's RMP article, you should take a look at this: http://arxiv.org/abs/cond-mat/0406174 . The RG really shines here, as Shankar is able to obtain the Eliashberg equations for the transition temperature of a strong coupled superconductor by starting from a Fermi liquid state without any assumption of symmetry breaking! And all in about six pages! I almost cried when I read it.
 
  • #29
Gokul -- Thanks. Sometimes I just don't get it. But, I had fun thinking about the problem.

Physics Monkey -- Yours is one of the best and informative posts I've seen in the forum. Thanks.

Regards,
Reilly
 
  • #30
George Jones said:
I have read an amusing anecdote about quantum mechanics and Godel's theorems. Once, Wheeler and Thorne went to Godel's office and asked him about possible relationships between his theorems and the uncertainty principle. Godel threw them out of his office!
That's special!
 
  • #31
Gokul43201 said:
That's special!

They weren't the first, nor possibly the last, to be thrown out of Godel's office. I think he classed HUP with SF. :biggrin: At any rate I have been told any attempt to derive physical (i.e. material) consequences from the theorem aroused his ire.

Ernies
 

1. What is Dirac's equation?

Dirac's equation is a mathematical formula developed by physicist Paul Dirac to describe the behavior of relativistic particles, such as electrons. It combines principles from both quantum mechanics and special relativity.

2. How does Dirac's equation relate to solving 0=1?

Dirac's equation does not directly relate to solving 0=1. The equation is a fundamental tool in understanding the behavior of particles, but it does not provide a solution to the mathematical contradiction of 0=1.

3. Can Dirac's equation be used to prove that 0=1?

No, Dirac's equation cannot be used to prove that 0=1. The equation is a mathematical model that accurately describes the behavior of particles, but it does not have the power to change fundamental mathematical principles.

4. Why is solving 0=1 with Dirac's equation important?

Solving 0=1 with Dirac's equation is not important in a mathematical sense, as it is not possible. However, exploring the implications of this contradiction can lead to a deeper understanding of the limitations of our current mathematical and scientific theories.

5. Is there a practical application for Dirac's equation?

Yes, Dirac's equation has many practical applications in the fields of particle physics, quantum mechanics, and solid-state physics. It has been used to make predictions about the behavior of particles and has played a crucial role in the development of modern technology, such as transistors and computer chips.
