Deriving resolution of the identity without Dirac notation

It's a subtle thing, and I don't fully understand the nuances myself. But one way of looking at it is that we 'identify' ##|n\rangle\langle n|## with ##|n\rangle##, which makes a sort of sense because the former maps ##|\psi\rangle## to the latter with a constant of proportionality ##c_n##. The price we pay is that we must now treat ##|n\rangle## as an operator that acts on kets rather than as a ket itself (which is why we write it with the ket on the left and the bra on the right), but that's the cost of the convenience of the Dirac notation.
  • #1
redtree
I am familiar with the derivation of the resolution of the identity in Dirac notation, where ## | \psi \rangle ## can be represented as a linear combination of basis vectors ## | n \rangle ## such that:
## | \psi \rangle = \sum_{n} c_n | n \rangle = \sum_{n} | n \rangle c_n ##
Assuming an orthonormal basis, then:
## c_n = \langle n | \psi \rangle ##
Such that:
## | \psi \rangle = \sum_{n} | n \rangle \langle n | \psi \rangle ##
Thus:
## 1 = \sum_{n} | n \rangle \langle n | ##
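In the finite-dimensional case I can at least verify this numerically. For example (just a quick sketch with NumPy; the orthonormal basis is an arbitrary one obtained from a QR decomposition):
```python
import numpy as np

# An arbitrary orthonormal basis of C^4: the columns of a unitary Q from a QR decomposition.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

# Sum of the outer products |n><n| over all basis vectors.
resolution = sum(np.outer(Q[:, n], Q[:, n].conj()) for n in range(4))

print(np.allclose(resolution, np.eye(4)))  # True: sum_n |n><n| = 1
```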
However, I don't think that I understand the derivation well enough to derive it without using Dirac notation. Does anyone know a proof of the identity without using Dirac notation, both for discrete and continuous variables?
 
  • #2
What do you mean by "without Dirac notation"? You then need to work in some representation, i.e., choose a basis as you have done. Take, for example, the position representation (wave mechanics) and choose as a basis the generalized momentum eigenfunctions (here we have an example with a purely continuous spectrum). The momentum operator is given by
$$\hat{p} \psi(x)=-\mathrm{i} \partial_x \psi(x).$$
The momentum eigenstates are thus defined by
$$-\mathrm{i}\partial_x u_p(x)=p u_p(x).$$
The solutions are
$$u_p(x)=N_p \exp(\mathrm{i} x p),$$
where only ##p \in \mathbb{R}## are allowed, because otherwise we'd have exponentially growing solutions, which are unphysical. So we are led to the plane waves.

To normalize them in a convenient way, however, there is some trouble, as usual for the continuous spectrum. The reason is that in this case the eigenfunctions are not in the Hilbert space of square-integrable functions, but what we do know is that the eigenfunctions should be orthogonal in some sense, and indeed from the theory of Fourier transformations we find
$$\langle p'|p \rangle=\int_{\mathbb{R}} \mathrm{d} x \langle p'|x \rangle \langle x|p \rangle=N_{p'}^* N_{p} \int_{\mathbb{R}} \mathrm{d} x \exp[\mathrm{i}(p-p')x]=2 \pi |N_p|^2 \delta(p-p').$$
So the most convenient normalization is to set ##N_p=1/\sqrt{2 \pi}##. Then the completeness relation reads
$$\int_{\mathbb{R}} \mathrm{d} p u_p(x') u_{p}^*(x)=\delta(x-x'),$$
which indeed holds true, using again the theory of Fourier transformations.

That this is indeed the completeness relation, written in the position representation, becomes clear via the Dirac formalism: the matrix elements of the unit operator in the position representation are
$$\langle x'|\hat{1}|x \rangle=\langle x'|x \rangle=\delta(x-x')=\int_{\mathbb{R}} \mathrm{d} p \langle x'|p \rangle \langle p|x \rangle=\int_{\mathbb{R}} \mathrm{d} p u_{p}(x') u_p^*(x).$$
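To make the ##\delta##-normalization a little less abstract, one can also check numerically that the truncated kernel ##K_P(x',x)=\int_{-P}^{P} \mathrm{d} p \, u_p(x') u_p^*(x)=\sin[P(x'-x)]/[\pi(x'-x)]## acts like ##\delta(x'-x)## on a smooth test function as the cutoff ##P## grows. A rough sketch (the Gaussian test function and the grid are arbitrary choices):
```python
import numpy as np

# Plane waves u_p(x) = exp(i p x)/sqrt(2 pi), momentum integral truncated to |p| <= P.
# Then K_P(x0, x) = sin(P (x0 - x)) / (pi (x0 - x)), which tends to delta(x0 - x).
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
f = np.exp(-x**2)     # arbitrary smooth test function
x0 = 0.5              # point at which we try to recover f

for P in (1.0, 5.0, 20.0):
    kernel = (P / np.pi) * np.sinc(P * (x0 - x) / np.pi)  # = sin(P(x0-x)) / (pi (x0-x))
    print(P, np.sum(kernel * f) * dx, "->", np.exp(-x0**2))
```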
 
  • #3
One of the reasons physicists love the Dirac bra-ket formalism is precisely because it avoids the less physically intuitive mathematical machinery of Functional Analysis, but it is in this branch of mathematics that you'll find your answer. So here's an outline of how to make sense of the whole 'resolution of the identity' thing in both the discrete and continuous case without using the Dirac shortcuts...

Let ##\mathcal A## be an arbitrary (real-valued) observable (Energy, Position, Momentum, or whatever). For any subset ##\Delta \subset \mathbb R## of the real line we can ask the question: "Will a measurement of ##\mathcal A## land in ##\Delta##?". This question is itself an observable which (having only the values 0 or 1) is represented by a projection operator we label ##P(\Delta)##. Now all the information concerning ##\mathcal A## is housed in the family ##P(\Delta)## (for all subsets ##\Delta##). This family is called the Projection Valued Measure (PVM) of ##\mathcal A##.

Obviously we must have ##P(\emptyset)=\mathbb O## and ##P(\mathbb R)=\mathbb I##, but we also get that ##P(\bigcup \Delta_i)=\sum P(\Delta_i)## for disjoint subsets ##\Delta_i##. This in turn means that if we partition the real line by ##\mathbb R=\bigcup \Delta_i## then we have ##P(\bigcup \Delta_i)=P(\mathbb R)=\mathbb I##, which is why we call the family ##P(\Delta)## the resolution of the identity (corresponding to ##\mathcal A##). (Intuitively, the resolution of the identity follows from the simple fact that the outcome of a measurement must turn up somewhere in ##\mathbb R##.)

What does all this abstract stuff have to do with your example? Well, in the nice simple case where we have an observable with a pure discrete spectrum, ##n=0,1,2,\ldots##, the only ##\Delta##s that matter are those of the form ##\Delta_n=\{n\}## (since the PVM vanishes outside the spectrum by definition). This leads to the result: ##P(\mathbb R)=P(\{0\})+P(\{1\})+P(\{2\})+\cdots =\sum P(\{n\})=\mathbb I##. Using the Dirac notation ##|n\rangle\langle n| := P(\{n\})##, this just says ##\sum |n\rangle\langle n|=\mathbb I##.
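In finite dimensions this is easy to see explicitly: for a Hermitian matrix the ##P(\{n\})## are just the eigenprojectors, and they sum to the identity. A small numerical sketch (the matrix standing in for the observable ##\mathcal A## is chosen at random):
```python
import numpy as np

# An arbitrary Hermitian matrix standing in for the observable A.
rng = np.random.default_rng(1)
M = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (M + M.conj().T) / 2

# Spectral decomposition: eigenvalues lambda_n and eigenprojectors P({lambda_n}).
evals, evecs = np.linalg.eigh(A)
projectors = [np.outer(evecs[:, n], evecs[:, n].conj()) for n in range(4)]

# Resolution of the identity: P(R) = sum_n P({lambda_n}) = 1.
print(np.allclose(sum(projectors), np.eye(4)))  # True
# Spectral theorem as a bonus: A = sum_n lambda_n P({lambda_n}).
print(np.allclose(sum(l * P for l, P in zip(evals, projectors)), A))  # True
```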

What about the continuous case? Well, define ##P(\lambda):=P(\left(-\infty,\lambda\right])##. Then we can set up a well-defined integral (known as a Stieltjes Integral) with respect to the measure ##dP(\lambda)##, and we find that not only does ##\int dP(\lambda) = \mathbb I## (resolution of the identity) hold in the continuous case, but in the discrete case it yields ##\int dP(\lambda)=\sum P(\{\lambda_i\})=\sum|\lambda_i\rangle\langle \lambda_i|##.
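The Stieltjes picture can be made concrete in finite dimensions too (again just a sketch, with a randomly chosen real symmetric matrix): ##P(\lambda)=P((-\infty,\lambda])## is a step function of projectors that climbs from ##\mathbb O## to ##\mathbb I##, and its jumps are exactly the ##P(\{\lambda_i\})##:
```python
import numpy as np

# P(lambda) := P((-inf, lambda]) for an arbitrary real symmetric matrix A.
rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
A = (M + M.T) / 2
evals, evecs = np.linalg.eigh(A)

def P(lam):
    """Projector onto the span of all eigenvectors of A with eigenvalue <= lam."""
    cols = evecs[:, evals <= lam]
    return cols @ cols.T

print(np.allclose(P(evals.min() - 1.0), np.zeros((4, 4))))  # True: P(-infinity) = 0
print(np.allclose(P(evals.max() + 1.0), np.eye(4)))         # True: P(+infinity) = 1
# The jump of P(lambda) at an eigenvalue is that eigenvalue's projector P({lambda_i}).
eps = 1e-6
print(np.allclose(P(evals[0] + eps) - P(evals[0] - eps),
                  np.outer(evecs[:, 0], evecs[:, 0])))       # True (generically)
```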

So that's how it works in pure mathematics. And that's why physicists use the Dirac notation instead ;-)
 
  • #4
What is meant by ## \mathbb O ##?

In the discrete case, why is the outer product ## | n \rangle \langle n | ## equal to P({n})?
 
  • #5
##\mathbb O## is the zero operator, meaning ##\mathbb O\psi = 0## for all vectors ##\psi##. ##| n \rangle \langle n |## is an operator which maps any vector ##|\psi\rangle## to ##| n \rangle \langle n |\psi\rangle=c_n|n\rangle##. In other words, it is a projection operator which maps ##\psi## to its projection onto the 'eigenspace' spanned by ##|n\rangle##. And that is exactly what ##P(\{n\})## means; that is, it is the projection operator onto the subspace associated with the eigenvalue ##n##.
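In components this is nothing more than an outer product acting on a column vector. A tiny sketch of ##|n\rangle\langle n|\psi\rangle = c_n |n\rangle## (the vectors here are made up):
```python
import numpy as np

n = np.array([0.0, 1.0, 0.0])         # a normalized basis vector |n>
psi = np.array([0.3, 0.5, 0.8j])      # an arbitrary state |psi>

proj = np.outer(n, n.conj())          # the operator |n><n|
c_n = n.conj() @ psi                  # the coefficient <n|psi>

print(np.allclose(proj @ psi, c_n * n))  # True: |n><n| |psi> = c_n |n>
```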

The Dirac formalism provides a rather ingenious way of representing projection operators and vectors in a way that lets you (sort of) interchange between the two without thinking.
 
  • #6
Perhaps one should also mention that the "rigged-Hilbert-space formalism" provides a mathematically rigorous and modern alternative to the traditional formalism (which is due to von Neumann's famous book on QT, which is mathematically brilliant but physically a bit mediocre; a fate it shares with Weyl's "Raum-Zeit-Materie"). The idea of the rigged Hilbert space makes Dirac's ingenious ideas mathematically rigorous. One should however note that, e.g., the idea of the Dirac ##\delta## distribution reaches back (at least) to Sommerfeld, who introduced it in some work on electrodynamics in 1912 (I'm not sure whether I can find the actual paper).
 
  • #7
If you don't mind, I have two related questions.

First: I understand that the completeness relation holds for basis vectors such that ## \sum_{j=1}^{m} | n_{j} \rangle \langle n_{j} | =\mathbb{I}##. Does it also hold for unit-normalized sets of state vectors, where ## | \phi_{j} \rangle = c_{j} |n_{j}\rangle ## with ##\sum_{j=1}^{m} |c_{j}|^2=1 ##, such that ## \sum_{j=1}^{m} | \phi_{j} \rangle \langle \phi_{j} | =\mathbb{I}##? I assume it does not hold for non-unit-normalized sets of state vectors.

Second: For the continuous case, can one define a function ##f(n)=\int_{\mathbb{R}} \phi_{n} dn ##, unit-normalized such that ##\int_{\mathbb{R}} |f(n)|^2 dn =1##, such that ## | f(n) \rangle \langle f(n) | = 1 ##?
 
  • #8
The standard completeness relation refers to orthonormalized Hilbert-space bases, i.e., a set of vectors ##|n \rangle##, ##n \in \mathbb{N}##, with ##\langle n'|n \rangle=\delta_{n',n}##, for which
$$\sum_n |n \rangle \langle n|=\hat{1}.$$
You can easily check that the vectors must be orthonormalized by applying the completeness relation to one of the vectors, ##|n_0 \rangle##, itself:
$$\sum_n |n \rangle \langle n|n_0 \rangle=\sum_n \delta_{nn_0} |n \rangle=|n_0 \rangle.$$
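This also shows what goes wrong with rescaled vectors as in #7: for ##|\phi_j\rangle = c_j |n_j\rangle## one gets ##\sum_j |\phi_j\rangle\langle\phi_j| = \sum_j |c_j|^2 |n_j\rangle\langle n_j|##, which is the identity only if every ##|c_j|=1## (and then ##\sum_j |c_j|^2=1## is impossible for ##m>1##). A quick numerical check with arbitrarily chosen coefficients:
```python
import numpy as np

basis = np.eye(3)                      # an orthonormal basis (the standard basis of C^3)

# Rescaled vectors phi_j = c_j n_j with sum_j |c_j|^2 = 1, as in #7.
c = np.array([0.6, 0.8j, 0.0])
phi = c[:, None] * basis               # row j is phi_j = c_j * n_j

S = sum(np.outer(phi[j], phi[j].conj()) for j in range(3))
print(np.allclose(S, np.eye(3)))             # False: not a resolution of the identity
print(np.allclose(S, np.diag(np.abs(c)**2))) # True: it equals diag(|c_j|^2) instead
```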
The next abstraction is generalized bases, which occur if you want to use eigenbases of self-adjoint operators with a continuous spectrum, like the position operator. Then you have integrals instead of sums, and the normalization is "to a ##\delta## distribution":
$$\int_{\mathbb{R}} \mathrm{d} x |x \rangle \langle x|=\hat{1}, \quad \langle x'|x \rangle=\delta(x'-x).$$
Sometimes one also uses "overcomplete sets" like the coherent states of the harmonic oscillator, but that's a very specialized topic.
 
  • #9
Thank you for your response. I understand how the completeness relation applies to basis states. I am trying to understand the extension to non-basis states. I made a correction on the first question; the second one still holds:
First: I understand that the completeness relation holds for basis vectors such that ## \sum_{j=1}^{m} | n_{j} \rangle \langle n_{j} | =\mathbb{I}##. Does it also hold for unit-normalized sets of state vectors, where ## | \phi_{j} \rangle = c_{j} |n_{j}\rangle ## with ##\sum_{j=1}^{m} |c_{j}|^2=1 ##, such that ## \sum_{j=1}^{m} | \phi_{j} \rangle \langle \phi_{j} | =\mathbb{I}##?

Second: For a continuous case, can one define a function ##f(n)=\int_{\mathbb{R}} \phi_{n} dn ## , unit-normalized such that ## \int_{\mathbb{R}} |f(n)|^2 dn =1##, such that ##| f(n) \rangle \langle f(n) | = 1 ##?
 
  • #10
Just to be clear: if ##n## is a vector, then ##| f(n) \rangle \langle f(n) | =\mathbb{I} ##?
 

1. What is the resolution of the identity?

The resolution of the identity is a way of writing the identity operator as a sum (or integral) of projection operators onto the eigenstates of a given operator. In quantum mechanics it is used to expand a quantum state in the eigenbasis of that operator.

2. Why is the resolution of the identity important in quantum mechanics?

The resolution of the identity allows us to simplify complex quantum calculations by breaking down a state into simpler components. It also plays a crucial role in the formulation of quantum mechanical equations and the interpretation of measurement outcomes.

3. What is Dirac notation and how is it related to the resolution of the identity?

Dirac notation, also known as bra-ket notation, is a mathematical notation used in quantum mechanics to represent states, operators, and inner products. It is closely related to the resolution of the identity as it provides a convenient way to express the sum of eigenstates in the resolution of the identity.

4. Can the resolution of the identity be derived without using Dirac notation?

Yes, it is possible to derive the resolution of the identity using only traditional mathematical notation and techniques. However, Dirac notation simplifies the derivation and makes it easier to understand and apply in quantum mechanics.

5. Are there any practical applications of the resolution of the identity?

Yes, the resolution of the identity is used extensively in various quantum mechanical calculations and simulations, including in the development of quantum algorithms and quantum information processing. It is also used in spectroscopy, quantum chemistry, and other areas of physics.
