Dirac Notation for Vectors and Tensors (Neuenschwander's text)


Discussion Overview

The discussion revolves around the interpretation of Dirac Notation as presented in "Tensor Calculus for Physics" by Dwight E. Neuenschwander. Participants seek clarification on the notation used for vectors and tensors, particularly in the context of finite versus infinite-dimensional spaces and the implications for inner products.

Discussion Character

  • Exploratory
  • Technical explanation
  • Textbook self-study

Main Points Raised

  • Peter expresses uncertainty about interpreting Dirac Notation, specifically regarding the representation of vectors and inner products.
  • Andrew points out that Peter's assumptions about finite-dimensional vector spaces may not hold, as Dirac's notation is often applied in infinite-dimensional contexts.
  • Andrew emphasizes the need for caution when assuming that inner products can be simplified to basic multiplication of vectors, noting the role of the metric tensor in more general cases.
  • DrClaude corrects Peter's notation for the bra vector, stating that its components should be the complex conjugates of the ket vector components.
  • Participants discuss recommended texts for further understanding of Dirac Notation, with suggestions including "Principles of Quantum Mechanics" by Shankar and "Quantum Mechanics" by Cohen-Tannoudji.
  • Peter inquires about the availability of worked examples and solutions in the recommended texts, indicating a desire for practical resources to aid comprehension.

Areas of Agreement / Disagreement

Participants largely converge: Peter accepts Andrew's caveats about dimensionality and the metric tensor, and DrClaude's correction to the bra notation goes unchallenged. The remaining open questions concern which of the recommended texts offer enough worked examples and solutions.

Contextual Notes

Participants highlight the importance of distinguishing between abstract vector spaces and their numerical representations, indicating potential limitations in understanding if one is accustomed to finite-dimensional linear algebra.

Who May Find This Useful

Readers interested in quantum mechanics, tensor calculus, and the mathematical foundations of physics may find this discussion relevant, particularly those seeking clarification on Dirac Notation.

Math Amateur
I am reading Tensor Calculus for Physics by Dwight E. Neuenschwander and am having difficulties in confidently interpreting his use of Dirac Notation in Section 1.9 ...

In Section 1.9 we read the following:

[Scan: Neuenschwander, Section 1.9, page 25]

[Scan: Neuenschwander, Section 1.9, page 26]

I need some help to confidently interpret and proceed with Neuenschwander's notation in the text above ... Indeed, I am not sure how to interpret Neuenschwander when he writes:

## 1 = \sum_{ \alpha } \vert \alpha \rangle \langle \alpha \vert ## ... ... ... (1.99)

Am I proceeding validly or correctly when I assume that

## | \alpha \rangle =
\begin{bmatrix}
\alpha^1 \\
\alpha^2 \\
\alpha^3
\end{bmatrix}

##

... and

## \langle \alpha | = \begin{bmatrix}
\alpha^1 & \alpha^2 & \alpha^3
\end{bmatrix}
##

... and when he writes

## \vert A \rangle = \sum_{ \alpha } \vert \alpha \rangle \langle \alpha \vert A \rangle ##

... can I assume that it is OK to take

## \vert A \rangle = \begin{bmatrix} A^1 \\ A^2 \\ A^3 \end{bmatrix} ##



so that## \langle \alpha \vert A \rangle =

\begin{bmatrix}
\alpha^1 & \alpha^2 & \alpha^3
\end{bmatrix}

\begin{bmatrix}
A^1 \\
A^2 \\
A^3
\end{bmatrix}
## ... and so on ...Am i proceeding correctly ?Hope someone can help ...

Peter
 

Hello Peter.
Broadly you have the correct idea but your approach relies on two implicit assumptions that do not hold in the general case.
Firstly, by writing ##\langle \alpha|## and ##|\alpha\rangle## as each having three components, you are assuming the vector space is finite-dimensional. Dirac's notation is mostly used in quantum mechanics, where the vector spaces are usually infinite-dimensional, so such a representation as a finite list of components is not available.
Secondly, you replace the inner product ##\langle\alpha|A\rangle## by a simple multiplication of a row vector ##\langle\alpha|## by a column vector ##|A\rangle##. That is only correct when those vectors are represented in an orthonormal basis. From your first embedded image, that seems to be the case here, but be aware that it will not always be so. The more general formula requires the metric tensor: ##\langle\alpha|A\rangle = \langle\alpha| \, M \, |A\rangle##, where ##M## is the representation of the metric tensor in the given basis. For an orthonormal basis ##M## is the identity matrix (with an infinite number of rows and columns if the space is infinite-dimensional!), so the calculation collapses to look like what you have shown.
Lastly, to avoid confusion, avoid writing things like $$
\vert A \rangle = \begin{bmatrix} A^1 \\ A^2 \\ A^3 \end{bmatrix}$$
The left-hand side is a vector in the space. The right-hand side is a representation of that vector in a particular basis. The two sides are not the same sort of object and should not be connected by an equals sign.

This can seem confusing if you've been introduced to linear algebra through vectors and matrices that are rows, columns or rectangles of numbers. Vector spaces are abstract objects, of which numeric tuples are only one type.
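To make the metric-tensor point above concrete, here is a minimal NumPy sketch (purely illustrative and not from the thread; it assumes a real three-dimensional space, and the sample components and metric are invented):

```python
import numpy as np

# Components of <alpha| and |A> in some chosen basis (invented numbers).
alpha = np.array([1.0, 2.0, 3.0])
A = np.array([4.0, 5.0, 6.0])

# Orthonormal basis: the metric's representation M is the identity,
# so <alpha|A> collapses to the familiar row-times-column product.
M_orthonormal = np.eye(3)
print(alpha @ M_orthonormal @ A)   # 32.0, same as np.dot(alpha, A)

# Non-orthonormal basis: M is some symmetric positive-definite matrix,
# and the plain row-times-column product would give the wrong answer.
M_general = np.array([[2.0, 0.5, 0.0],
                      [0.5, 1.0, 0.0],
                      [0.0, 0.0, 3.0]])
print(alpha @ M_general @ A)       # 78.5, not 32.0
```

In the orthonormal case the metric drops out, which is why the row-times-column calculation in the opening post works for Neuenschwander's basis.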
 
Thanks Andrew … your reply is VERY helpful …

I very much appreciate your help …

Thanks again,

Peter
 
andrewkirk said:
Broadly you have the correct idea but your approach relies on two implicit assumptions that do not hold in the general case. …
Andrew,

… thanks again for your helpful post …

Do you have any recommended texts on Dirac Notation for vectors and tensors …?

Peter
 
The ones I have used are in Quantum Mechanics texts.
'Principles of Quantum Mechanics' by Shankar has what seemed to me a nicely-paced introduction to it.
The other text I have that covers it is 'Quantum Mechanics' by Cohen-Tannoudji - a highly respected text but it's a big purchase - two heavy volumes!
 
Thanks Andrew …
 
Math Amateur said:
… ## \langle \alpha | = \begin{bmatrix}
\alpha^1 & \alpha^2 & \alpha^3
\end{bmatrix}
##
To add to @andrewkirk's answer: note that what you wrote for the bra is incorrect; it should read
## \langle \alpha | = \begin{bmatrix}
\bar{\alpha}^1 & \bar{\alpha}^2 & \bar{\alpha}^3
\end{bmatrix}
##
i.e., the vector components of the bra are the complex conjugates of those of the ket.
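A quick numerical illustration of this point (a sketch with made-up components, not taken from the text):

```python
import numpy as np

# A ket with complex components (invented for the example).
ket = np.array([1 + 1j, 2 + 0j, 3j])

# The corresponding bra has the complex-conjugated components
# (for 1-D NumPy arrays the row-versus-column layout is implicit).
bra = ket.conj()

# <ket|ket> must be real and non-negative; without conjugation
# it generally is not.
print(bra @ ket)   # (15+0j): a genuine squared norm
print(ket @ ket)   # (-5+2j): complex, so not a valid norm
```

For real components the conjugation changes nothing, which is why it is easy to overlook in a real three-dimensional setting like Neuenschwander's.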
 
andrewkirk said:
The other text I have that covers it is 'Quantum Mechanics' by Cohen-Tannoudji - a highly respected text but it's a big purchase - two heavy volumes!
I love Cohen-Tannoudji! And they're actually still printing it!

-Dan
 
topsquark said:
I love Cohen-Tannoudji! And they're actually still printing it!

-Dan
Dan,

Does it have enough worked examples?

Does the book provide any solutions to exercises …?

These items can help one get a good grasp of the theory …

Peter
 
  • #10
andrewkirk said:
The ones I have used are in Quantum Mechanics texts.
'Principles of Quantum Mechanics' by Shankar has what seemed to me a nicely-paced introduction to it.
The other text I have that covers it is 'Quantum Mechanics' by Cohen-Tannoudji - a highly respected text but it's a big purchase - two heavy volumes!
Andrew,

Do the Cohen-Tannoudji volumes have a good number of worked examples … ?

Does it have worked solutions to some of the exercises …?

I think these items can help one get a good grasp of the theory …

Peter
 
  • #11
Math Amateur said:
Does it have enough worked examples? … Does the book provide any solutions to exercises …?
It has a number of examples, explicitly worked out, but at that level you will find that the worked examples and the exercises don't overlap all that well. However the text is relentless in its application of Mathematics to the problem of developing Intro QM. (I'm finding as time goes on I'm leaning more and more to Mathematical Physics.) Once you have the notation down the text is very clearly organized and covers practically any loose ends you might run into.

IMHO I wouldn't say it's the best text(s) to learn from but it makes for one heck of a good review. I'm afraid I don't have any source suggestions to add to andrewkirk's post. My introduction to QM (and particularly QFT) was kind of "non-standard."

-Dan
 
  • #12
topsquark said:
It has a number of examples, explicitly worked out, but at that level you will find that the worked examples and the exercises don't overlap all that well. …
Oh … interesting …

Thanks Dan for those helpful comments and thoughts …

Peter
 
  • #13

Hello Andrew,

Thanks again for your help with Neuenschwander Section 1.9 ... {Note: a scan of Neuenschwander Section 1.9 from the start to page 26 is available below ...}

I am hoping you can clarify some issues for me ...

You write:

" ... ... Lastly, to avoid confusion, avoid writing things like
## \vert A \rangle = \begin{bmatrix} A^1 \\ A^2 \\ A^3 \end{bmatrix} ##
The left-hand side is a vector in the space. The right-hand side is a representation of that vector in a particular basis. The two sides are not the same sort of object and should not be connected by an equals sign. ... ... "

I note that the author tends to interpret the situation in 3 dimensions ... so for convenience I did the same ...

The author also says the basis ## \alpha ## is orthonormal (see the note directly before equation (1.99) [see scanned text below ... page 25 ...]).

Given the context of a 3-dimensional space and an orthonormal basis ... and further given the fact that we have a given basis ## \alpha ## ... am I justified in writing something like

## \vert A \rangle = \begin{bmatrix} A^1 \\ A^2 \\ A^3 \end{bmatrix} ##

... see also the author's equation (1.86), which has an expression similar to mine above ... [see scanned notes below ...]

... On another issue, I am completely stumped by the equations directly following equation (1.99) [see scanned notes below ... top of page 26 ...] ... that is ...

## \vert A \rangle = \sum_{ \alpha } \vert \alpha \rangle \langle \alpha \vert A \rangle ## ... ... ... (*)

In (*) all I can think in terms of spelling out the meaning is to take

## \vert A \rangle = \begin{bmatrix} A^1 \\ A^2 \\ A^3 \end{bmatrix} ##

and
## | \alpha \rangle =
\begin{bmatrix}
\alpha^1 \\
\alpha^2 \\
\alpha^3
\end{bmatrix}

##

and

## \langle \alpha \vert A \rangle =

\begin{bmatrix}
\alpha^1 & \alpha^2 & \alpha^3
\end{bmatrix}

\begin{bmatrix}
A^1 \\
A^2 \\
A^3
\end{bmatrix}


= \alpha^1 A^1 + \alpha^2 A^2 + \alpha^3 A^3 ##

so ...

## \vert \alpha \rangle \langle \alpha \vert A \rangle = \begin{bmatrix} \alpha^1 \\ \alpha^2 \\ \alpha^3 \end{bmatrix} [ \alpha^1 A^1 + \alpha^2 A^2 + \alpha^3 A^3 ]

= \begin{bmatrix} (\alpha^1)^2 A^1 + \alpha^1 \alpha^2 A^2 + \alpha^1 \alpha^3 A^3 \\ \alpha^2 \alpha^1 A^1 + (\alpha^2)^2 A^2 + \alpha^2 \alpha^3 A^3 \\ \alpha^3 \alpha^1 A^1 + \alpha^3 \alpha^2 A^2 + (\alpha^3)^2 A^3 \end{bmatrix} ##

... however ... why do we need the ## \sum_{ \alpha } ## in ## \sum_{ \alpha } \vert \alpha \rangle \langle \alpha \vert A \rangle ## ... and further ... how should I interpret ## \sum_{ \alpha } A^{ \alpha } \vert \alpha \rangle ## ...?

Is the above making sense? Is the above a reasonable interpretation of Neuenschwander?

Hope you can help ...

Peter

It will be helpful if you have access to Neuenschwander Section 1.9, so I have scanned the relevant pages of Section 1.9, and they read as follows ...

[Scans: Neuenschwander, Section 1.9, pages 23 to 26]

 
  • #14

DrClaude said:
To add to @andrewkirk's answer: note that what you wrote for the bra is incorrect; it should read
## \langle \alpha | = \begin{bmatrix}
\bar{\alpha}^1 & \bar{\alpha}^2 & \bar{\alpha}^3
\end{bmatrix}
##
i.e., the vector components of the bra are the complex conjugates of those of the ket.
Thanks DrClaude ... appreciate your help ...

Peter
 
  • #15
In the following I write ##a## instead of ##\alpha## cos it's quicker to type.

Peter, I find it more intuitive to write ##\sum_a |a\rangle\langle a | A\rangle## as
##\sum_a \langle a | A\rangle |a\rangle## because the first factor ##\langle a | A\rangle## in the summand is a scalar and the second is a vector, and we usually do scalar multiplication of vectors on the left.
That formula gives the projection of vector ##|A\rangle## onto the subspace generated by vector ##|a\rangle##.
You may recall from your linear algebra studies that, given a basis for a vector space, we can write any vector as the sum of its projections onto the subspaces generated by each of the basis vectors. Those projections are all orthogonal. We get the sum of those projections by summing over the basis vectors ##|a\rangle##. You can think of ##a## as the label or index of the vector ##|a\rangle## - that lets us write the summing subscript as ##a## rather than ##|a\rangle##, which would be a bit too notation-heavy - i.e. we can write ##\sum_a## instead of ##\sum_{|a\rangle}##.

Summary:
Given a basis whose vectors correspond to a set of labels (also called an "index set") ##\mathscr B##, and a vector ##|A\rangle##, the projection of that vector on a vector ##|a\rangle## for ##a\in\mathscr B## is given by ##\langle a | A\rangle\ |a\rangle##. It follows by the definition of a basis that we can write a vector as the sum of its components in that basis, thus:
$$|A\rangle = \sum_{a\in\mathscr B} \langle a | A\rangle\ |a\rangle$$
For brevity one usually replaces the summation subscript ##a\in\mathscr B## by just ##a## to give the ##\sum_a## the text has.
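
A small numerical check of this decomposition (a sketch, assuming a real three-dimensional space; the basis is generated randomly, and the names are invented for the example):

```python
import numpy as np

# Build an arbitrary orthonormal basis of R^3 via QR decomposition.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
basis = [Q[:, i] for i in range(3)]   # the kets |a>

A = np.array([4.0, 5.0, 6.0])         # the vector |A>

# Completeness, equation (1.99): sum_a |a><a| is the identity.
completeness = sum(np.outer(a, a) for a in basis)
assert np.allclose(completeness, np.eye(3))

# Expansion: |A> = sum_a <a|A> |a>, the sum of the projections.
reconstructed = sum((a @ A) * a for a in basis)
assert np.allclose(reconstructed, A)
```

Each term ##\langle a|A\rangle\,|a\rangle## is the orthogonal projection of ##|A\rangle## onto the line spanned by ##|a\rangle##, and summing over the whole basis recovers ##|A\rangle## exactly, which is what the ##\sum_a## is doing.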
 
  • #16
Re your question about the texts, I looked back at my copies of those two.
They both have exercises. The French one has more exercises than Shankar. OTOH Shankar provides solutions to some of the exercises, whereas the French one does not.
Some on this forum sing the praises of the QM text by Ballentine. I have not read it myself, but I recall noting it because I respect the opinions of those praising it.
 
  • #17
Thanks Andrew ...

Reflecting on what you have written ...

Thanks again,

Peter
 
