I Confusion with Dirac notation in the eigenvalue problem

Summary
Is ##\langle i| (\Omega - \omega I)|V\rangle=0## the same as ##\langle i|\Omega - \omega I|V\rangle=0##?
Why dot both sides with a basis bra ##\langle i|##?
Hi!

I am studying Shankar's "Principles of QM". The first chapter is all about linear algebra in Dirac notation, and I have reached the section "The Characteristic Equation and the Solution to the Eigenvalue Problem", which starts from the eigenvalue problem, equation 1.8.3:
$$ (\Omega - \omega I)|V\rangle = |0\rangle $$ (where ##|V\rangle## is any ket in ##\mathbb V^n(C)##).

We operate on both sides with ##(\Omega - \omega I)^{-1}## and get:
$$ |V\rangle = (\Omega - \omega I)^{-1} |0\rangle $$ But since the inverse only exists when the determinant is non-zero and we don't want the trivial solution, we need to impose the condition ##\det(\Omega - \omega I)=0## to determine the eigenvalues ##\omega##. So, to find them, we (I quote directly from the book) "project Eq. 1.8.3 onto a basis. Dotting both sides with a basis bra ##\langle i|##, we get:"
$$\langle i|\Omega - \omega I|V\rangle=0$$ And that's where I am stuck! Everything seems to go the same as in linear algebra with matrix notation, except that at this point what I would normally do is explicitly write out and solve ##\det(\Omega - \omega I)=0## to find the eigenvalues and then solve 1.8.3 for each of them to get the eigenvectors. Why do we now need to "dot both sides with a basis bra"?
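For concreteness, here is a minimal numerical sketch of the ##\det(\Omega - \omega I)=0## route (the 2×2 Hermitian matrix is a hypothetical example of my own, not from Shankar), comparing the roots of the characteristic polynomial with a library eigensolver:

```python
import numpy as np

# Hypothetical 2x2 Hermitian operator Omega in some orthonormal basis |1>, |2>;
# its entries are the matrix elements <i|Omega|j>.
Omega = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

# For a 2x2 matrix, det(Omega - w I) = w^2 - tr(Omega) w + det(Omega) = 0
tr, det = np.trace(Omega), np.linalg.det(Omega)
eigenvalues = np.roots([1.0, -tr, det])

# Same eigenvalues from the library routine for Hermitian matrices
assert np.allclose(sorted(eigenvalues), np.linalg.eigvalsh(Omega))
```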

Also, this is my first encounter with Dirac notation, and although I think the book does a decent job of introducing it, it still leaves some things unexplained, which can be confusing. That leads me to my second question:
Is "##<i| (\Omega - \omega I)|V>=0 ##" the same as "##<i|\Omega - \omega I|V>=0 ##" ? They seem to be (following the logic of the book) the same, but (to me) the second expression looks more like a subtraction of a bra times an operator minus a scalar times an identity operator times a ket while the first expression looks more like "one entity". If they indeed are the same, then why disposing the parenthesis? If they are not the same, is there an intuitive way to differentiate them that I am missing?

By the way, if anyone knows a good resource for learning Dirac notation in a clearer manner, I would highly appreciate it, because that's what I am struggling with the most.
 

vanhees71

To evaluate the determinant you need a matrix with numbers, and in QT these are given by the matrix elements of the corresponding operator. This of course only works for operators within a finite-dimensional Hilbert space, e.g., for spin states, where for spin ##s## an orthonormal basis is given by eigenvectors of the ##z##-component of the spin ##\hat{s}_z## with eigenvalues ##s_z \in \{-s,-s+1,\ldots,s-1,s \}##.
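As a small sketch of this (in units where ##\hbar=1##, quoting the standard spin-1/2 matrix for ##\hat{s}_x## in the ##\hat{s}_z## eigenbasis as a known result, not derived here):

```python
import numpy as np

hbar = 1.0  # work in units where hbar = 1

# Spin-1/2: in the eigenbasis of s_z, the matrix elements <m'|s_x|m>
# form the (hbar/2) * Pauli-x matrix
s_x = (hbar / 2) * np.array([[0.0, 1.0],
                             [1.0, 0.0]])

# det(s_x - w I) = 0  <=>  w^2 - (hbar/2)^2 = 0, so w = ±hbar/2
eigenvalues = np.linalg.eigvalsh(s_x)
assert np.allclose(eigenvalues, [-hbar/2, hbar/2])
```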

It's far more usual and convenient to define eigenvectors and eigenvalues of an operator just by the equation
$$\hat{\Omega} |\omega \rangle = \omega |\omega \rangle,$$
where ##\omega## is the eigenvalue and ##|\omega \rangle## an eigenvector to that eigenvalue of the operator ##\hat{\Omega}##. For the observables you deal with self-adjoint operators, and then all eigenvalues must be real, and the eigenvectors to different eigenvalues are orthogonal to each other. Within each eigenspace for a fixed eigenvalue you can in addition choose an orthonormal set of basis vectors (which then are all eigenvectors to this eigenvalue).
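These claims about self-adjoint operators are easy to check numerically on a random Hermitian matrix; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Omega = A + A.conj().T          # a self-adjoint (Hermitian) operator

w, V = np.linalg.eigh(Omega)    # columns of V are orthonormal eigenvectors

assert np.allclose(w.imag, 0)                    # eigenvalues are real
assert np.allclose(V.conj().T @ V, np.eye(4))    # eigenvectors orthonormal
assert np.allclose(Omega @ V, V @ np.diag(w))    # Omega |w> = w |w>
```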
 
Ok, got it! That makes sense for my question about the eigenvalue problem! Thanks!

Could you give me any hint about my question regarding Dirac's notation?
 

kith

Is "##<i| (\Omega - \omega I)|V>=0 ##" the same as "##<i|\Omega - \omega I|V>=0 ##" ?
Yes.

They seem to be (following the logic of the book) the same, but (to me) the second expression looks more like a subtraction of a bra times an operator minus a scalar times an identity operator times a ket [...]
You cannot add bras and kets. Kets are vectors in the Hilbert space while bras are not.

What are bras then? Let's examine your expression. Writing ##\Omega - \omega I|V\rangle## alone would be sloppy notation but the intended meaning is that the linear operator ##\Omega - \omega I## is applied to the vector ##|V\rangle##. This yields a new vector which we might call ##|V'\rangle##. Using this, your equation reads ##\langle i|V'\rangle=0##. This is the inner product between the two vectors ##|i\rangle## and ##|V' \rangle##. In the notation of linear algebra, the expression would read ##\langle i, V' \rangle = 0##.

So the meaning of the bra ##\langle i|## is the following: if you take any vector ##|\phi\rangle## and apply the bra ##\langle i|## to it, you get the inner product of ##|i \rangle## and ##|\phi \rangle##. In other words, bras are not vectors in the Hilbert space but linear functions from the Hilbert space to the complex numbers. So ##\langle i|: \mathcal{H} \rightarrow \mathbb{C}##, which could be written like this in the notation of linear algebra: ##\langle i, \cdot \rangle: \mathcal{H} \rightarrow \mathbb{C}##.

[Diving a bit deeper into linear algebra, we realize that the bras themselves also form a vector space ##\mathcal{H}^*## which is the so-called dual space of ##\mathcal{H}##. So it does make sense to call them bra vectors but the important point to understand is that the state vectors of your quantum system are always the kets.]
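In a finite-dimensional space this picture is easy to make concrete: a ket is a column vector, the corresponding bra is its conjugate transpose, and applying the bra yields the inner product. A minimal sketch (the example vectors are my own choice):

```python
import numpy as np

# Ket |i> as a column vector in C^2; the bra <i| is its conjugate transpose,
# i.e. a linear map C^2 -> C.
i_ket = np.array([1.0, 1j]) / np.sqrt(2)
phi   = np.array([1.0, 0.0])

bra_i = i_ket.conj()                    # components of the functional <i|

# Applying the bra to a ket yields the inner product <i|phi>
assert np.isclose(bra_i @ phi, np.vdot(i_ket, phi))

# Linearity: <i|(a phi + b chi)> = a <i|phi> + b <i|chi>
chi, a, b = np.array([0.0, 1.0]), 2.0 + 1j, -0.5
assert np.isclose(bra_i @ (a*phi + b*chi), a*(bra_i @ phi) + b*(bra_i @ chi))
```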
 

vanhees71

One should add the following.

In physics we are pretty sloppy concerning the math, but Hilbert space is so well-behaved that this almost always works; sometimes, though, you get confused. So let's elaborate a bit, following #4 by @kith.

Indeed you start with the Hilbert space, which is a vector space ##\mathcal{H}## with the scalars being the complex numbers together with a scalar product, which is a positive definite sesquilinear form, i.e., it fulfills
$$\langle \psi|\alpha \phi +\beta \chi \rangle=\alpha \langle \psi|\phi \rangle + \beta \langle \psi|\chi \rangle$$
for all ##|\psi \rangle##, ##|\phi \rangle##, ##|\chi \rangle \in \mathcal{H}## and ##\alpha,\beta \in \mathbb{C}##.
It also fulfills
$$\langle \psi |\phi \rangle=\langle \phi|\psi \rangle^{*},$$
which implies that the scalar product is semilinear in the first argument
$$\langle \alpha \psi + \beta \phi|\chi \rangle=\alpha^* \langle \psi |\chi \rangle + \beta^* \langle \phi |\chi \rangle.$$
Further you assume positive definiteness,
$$\langle \psi|\psi \rangle \geq 0, \quad \langle \psi|\psi \rangle = 0 \Leftrightarrow |\psi \rangle=0.$$
Now the scalar product induces a norm,
$$\|\psi \|=\sqrt{\langle \psi |\psi \rangle},$$
and you can introduce sequences of Hilbert-space vectors and the idea of convergence as usual.
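These axioms are mirrored by NumPy's `vdot`, which conjugates its first argument; a quick numerical check of sesquilinearity, conjugate symmetry, and the induced norm (the example vectors are my own):

```python
import numpy as np

psi = np.array([1.0 + 1j, 2.0])
phi = np.array([0.5, -1j])
chi = np.array([1j, 1.0])
alpha, beta = 2.0 - 1j, 0.5 + 2j

# np.vdot conjugates its first argument, matching <psi|phi>.
# Linear in the second slot:
assert np.isclose(np.vdot(psi, alpha*phi + beta*chi),
                  alpha*np.vdot(psi, phi) + beta*np.vdot(psi, chi))
# Semilinear (antilinear) in the first slot:
assert np.isclose(np.vdot(alpha*psi, phi), np.conj(alpha)*np.vdot(psi, phi))
# Conjugate symmetry:
assert np.isclose(np.vdot(psi, phi), np.conj(np.vdot(phi, psi)))
# Induced norm:
assert np.isclose(np.linalg.norm(psi), np.sqrt(np.vdot(psi, psi).real))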

Then to the algebraic axioms you also add some "topological" axioms, with the topology induced by this "natural" norm. In addition you assume the Hilbert space to be complete, i.e., any Cauchy sequence is assumed to converge within Hilbert space:

A sequence ##(|\psi_n \rangle)_{n \in \mathbb{N}}## is a Cauchy sequence, if for each ##\epsilon>0## you can find a number ##N \in \mathbb{N}## such that for all ##n_1,n_2>N##
$$\|\psi_{n_1} - \psi_{n_2} \|<\epsilon.$$

In addition you also assume that there is a countable orthonormal basis, i.e., a set of "unit vectors" ##(|u_n \rangle)_{n \in \mathbb{N}}## with
$$\langle u_{n_1}|u_{n_2} \rangle =\delta_{n_1 n_2}$$
and each Hilbert-space vector can be written as a series of the form
$$|\psi \rangle=\sum_{n=1}^{\infty} \psi_n |u_n \rangle,$$
where the complex coefficients are given by
$$\psi_n=\langle u_n|\psi \rangle.$$
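This basis expansion is easy to verify in a finite-dimensional stand-in for ##\mathcal{H}## (an orthonormal basis built via a QR decomposition; the construction is my own, not from the post):

```python
import numpy as np

n = 4
rng = np.random.default_rng(1)
# QR decomposition of a random complex matrix yields a unitary U,
# whose columns form an orthonormal basis |u_1>, ..., |u_n> of C^n.
U, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j*rng.normal(size=(n, n)))
basis = [U[:, k] for k in range(n)]

psi = rng.normal(size=n) + 1j*rng.normal(size=n)

# psi_k = <u_k|psi>, and |psi> = sum_k psi_k |u_k>
coeffs = [np.vdot(u, psi) for u in basis]
reconstructed = sum(c*u for c, u in zip(coeffs, basis))
assert np.allclose(reconstructed, psi)
```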
Now, as @kith said, for any vector space you can introduce the dual space. Before we turn to the Dirac notation, we have to think a bit. In general, for infinite-dimensional vector spaces, the dual space is not isomorphic to the vector space itself, and in finite-dimensional spaces the one-to-one-mapping is via some arbitrarily chosen basis, i.e., it's not basis independent. The latter changes, if the space has a scalar product, and with some subtle amendment it's also true for Hilbert spaces. Here is the idea (of course, we can't prove anything, let alone in a mathematically rigorous way, in forum postings):

A member of the dual space (let's write for the moment ##\Psi^*## for them) is by definition a linear form, i.e., ##\Psi^*:\mathcal{H} \rightarrow \mathbb{C}## which fulfills
$$\Psi^*(\alpha |\phi \rangle + \beta |\chi \rangle)=\alpha \Psi^*|\phi \rangle + \beta \Psi^* |\chi \rangle.$$
Now a special form of such a linear form obviously is "induced" by a vector ##|\psi \rangle \in \mathcal{H}## via
$$\Psi_{\psi}^* |\phi \rangle=\langle \psi|\phi \rangle.$$
Now one can prove that for any continuous linear form ##\Phi^*## on ##\mathcal{H}## (in the sense of the topology of ##\mathcal{H}## induced by the scalar product as detailed above) there's a uniquely determined vector ##|\psi \rangle## such that ##\Phi^*=\Psi_{\psi}^*##, and this is a one-to-one mapping between the Hilbert space and its dual; thus the continuous linear forms can just be identified with ##\mathcal{H}##. In this sense of an isomorphism you have ##\mathcal{H}^*=\mathcal{H}##.

Now, in quantum theory, there's the necessity to deal with non-continuous linear forms as well as non-continuous linear mappings, which are not necessarily defined on the full Hilbert space. This is the case already in the very beginning (in conventional treatments, where you start with particles and not with spins), i.e., for the position and momentum operators. Let's take the realization of quantum theory in terms of wave functions (the position representation of QT) and let's just deal with motion in one dimension. Then the Hilbert space is ##\mathcal{H}=\mathrm{L}^2(\mathbb{R})##, the Hilbert space of square-integrable functions, with the scalar product given as
$$\langle \psi |\phi \rangle=\int_{\mathbb{R}} \mathrm{d} x \psi^*(x) \phi(x).$$
The position and momentum operators are defined as
$$\hat{x} \psi(x)=x \psi(x), \quad \hat{p} \psi(x)=-\mathrm{i}\hbar \partial_x \psi(x).$$
It's clear that these operators are only defined on a subset of all square-integrable functions, but they are defined on a dense subspace ##D## (e.g., the space of rapidly falling ##C^{\infty}## functions). The dual of this dense subspace is larger than the dual of ##\mathcal{H}##. It includes, e.g., the Dirac ##\delta## distribution, and this enables you to define "generalized eigenfunctions" like the position eigenfunctions ##u_{x}(\xi)##, which are "normalized to a ##\delta## distribution",
$$\langle u_{x'}|u_{x} \rangle = \int_\mathbb{R} \mathrm{d} \xi u_{x'}^*(\xi) u_{x}(\xi)=\delta(x-x').$$
The same holds for momentum eigenfunctions ##\tilde{u}_p(x)##, which you can easily find by solving the corresponding differential equation to be
$$\tilde{u}_p(x)=\frac{1}{\sqrt{2 \pi \hbar}} \exp(\mathrm{i} p x/\hbar), \quad p \in \mathbb{R},$$
which also fulfills the corresponding normalization condition
$$\langle \tilde{u}_{p'}|\tilde{u}_{p} \rangle = \int_{\mathbb{R}} \mathrm{d} x \tilde{u}_{p'}^*(x) \tilde{u}_p(x) = \int_{\mathbb{R}} \mathrm{d} x \frac{1}{2 \pi \hbar} \exp[\mathrm{i} (p-p') x/\hbar]=\delta(p-p'),$$
where we have used the integral in the usual sense as introduced in the theory of Fourier transformations.
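A rough numerical illustration of ##\hat{p} \tilde{u}_p = p \tilde{u}_p## (a finite-difference sketch on a grid of my own construction; the ##\delta##-normalization itself cannot be checked this way):

```python
import numpy as np

hbar, p = 1.0, 2.5
x = np.linspace(-5, 5, 20001)
dx = x[1] - x[0]
u_p = np.exp(1j * p * x / hbar) / np.sqrt(2*np.pi*hbar)   # plane wave

# p_hat u = -i hbar d/dx u, approximated by central differences
p_hat_u = -1j * hbar * np.gradient(u_p, dx)

# away from the grid boundary, p_hat u ≈ p * u: a generalized eigenfunction
assert np.allclose(p_hat_u[1:-1], p * u_p[1:-1], atol=1e-4)
```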

A formalization of these sloppy physicists' ideas is known as the "rigged Hilbert space". A book very briefly introducing this technique is

L. Ballentine, Quantum Mechanics, Addison-Wesley

For a mathematically more rigorous treatment, see, e.g.,


A good textbook is

A. Galindo, P. Pascual, Quantum Mechanics, Springer (2 vols.)
 
You cannot add bras and kets. Kets are vectors in the Hilbert space while bras are not.
Jeez! That's right! And now it makes sense why it doesn't make sense to consider the expression as a subtraction.

In other words, bras are not vectors in Hilbert space but a linear functions from the Hilbert space to the complex numbers. So ##\langle i|: \mathcal{H} \rightarrow \mathbb{C}## [...]
Shankar mentions that ##\langle i|## is the adjoint of ##|i \rangle## and that both belong to two vector spaces, with a ket for every bra and vice versa. Your explanation is really clear and will help this stick in my head. Thanks!

@vanhees71 wow, definitely that's a lot to think about! For sure I will come back to check this post when I get deeper into the topic and will check the references you gave. Thanks!
 
