What is Braket notation: Definition and 12 Discussions
In quantum mechanics, bra–ket notation, or Dirac notation, is ubiquitous. The notation uses the angle brackets "⟨" and "⟩" and a vertical bar "|" to construct "bras" and "kets".
A ket looks like "|v⟩". Mathematically it denotes a vector, v, in an abstract (complex) vector space V, and physically it represents a state of some quantum system.
A bra looks like "⟨f|", and mathematically it denotes a linear form f : V → ℂ, i.e. a linear map that maps each vector in V to a number in the complex plane ℂ. Letting the linear functional ⟨f| act on a vector |v⟩ is written as ⟨f|v⟩ ∈ ℂ.
Assume that on V there exists an inner product (·,·) with antilinear first argument, which makes V a Hilbert space. With this inner product each vector φ ≡ |φ⟩ can then be identified with a corresponding linear form, by placing the vector in the antilinear first slot of the inner product: (φ,·) ≡ ⟨φ|. The correspondence between these notations is then (φ,ψ) ≡ ⟨φ|ψ⟩. The linear form ⟨φ| is a covector to |φ⟩, and the set of all covectors forms a subspace of the dual vector space V∨ to the initial vector space V. The purpose of this linear form ⟨φ| can now be understood in terms of making projections onto the state φ, finding how linearly dependent two states are, and so on.
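The projection role of the bra can be made concrete numerically. A minimal sketch in Python with NumPy, using made-up two-component states (the vectors here are arbitrary illustrations, not from the text above):

```python
import numpy as np

# Hypothetical states |phi> and |psi> as column vectors in C^2.
phi = np.array([[1.0 + 0.0j], [1.0j]]) / np.sqrt(2)
psi = np.array([[1.0 + 0.0j], [0.0 + 0.0j]])

# The bra <phi| is the conjugate transpose (the antilinear first slot).
bra_phi = phi.conj().T

# <phi|psi> is a complex number measuring the overlap of the two states.
overlap = (bra_phi @ psi).item()

# The outer product |phi><phi| applied to |psi> projects |psi> onto phi.
projection = phi @ bra_phi @ psi

print(overlap)
print(projection)
```

An overlap of zero would mean the states are orthogonal (fully linearly independent); an overlap of modulus one would mean they are the same state up to phase.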
For the vector space ℂⁿ, kets can be identified with column vectors, and bras with row vectors. Combinations of bras, kets, and operators are then interpreted using matrix multiplication. If ℂⁿ has the standard Hermitian inner product (v, w) = v†w, then under this identification the correspondence between kets and bras provided by the inner product is Hermitian conjugation (denoted †).
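This identification is easy to check directly; a short NumPy sketch with example vectors chosen for illustration:

```python
import numpy as np

# Example kets in C^2 as column vectors (values are arbitrary).
v = np.array([[1.0 + 0.0j], [2.0j]])
w = np.array([[3.0j], [1.0 + 0.0j]])

# The bra <v| is the Hermitian conjugate (dagger) of the ket |v>.
bra_v = v.conj().T

# Standard Hermitian inner product: (v, w) = v^dagger w = <v|w>.
inner = (bra_v @ w).item()
print(inner)
```

Note the conjugation: swapping the arguments gives the complex conjugate, (w, v) = (v, w)*, which is why the first slot is called antilinear.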
It is common to suppress the vector or linear form from the bra–ket notation and use only a label inside the typography for the bra or ket. For example, the spin operator σ̂_z on a two-dimensional space Δ of spinors has eigenvalues ±½ with eigenspinors ψ_+, ψ_- ∈ Δ. In bra–ket notation one typically denotes these as ψ_+ = |+⟩ and ψ_- = |−⟩. Just as above, kets and bras with the same label are interpreted as kets and bras corresponding to each other using the inner product. In particular, when also identified with row and column vectors, kets and bras with the same label are identified with Hermitian-conjugate column and row vectors.
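The |±⟩ labels can be made concrete in the standard 2×2 matrix representation; a sketch assuming the usual Pauli-matrix convention with ħ set to 1:

```python
import numpy as np

# Spin operator along z for spin-1/2: (1/2) times the Pauli z matrix (hbar = 1).
Sz = 0.5 * np.array([[1.0, 0.0], [0.0, -1.0]])

# Eigenspinors: |+> = (1, 0)^T with eigenvalue +1/2,
#               |-> = (0, 1)^T with eigenvalue -1/2.
ket_plus = np.array([[1.0], [0.0]])
ket_minus = np.array([[0.0], [1.0]])

print(np.allclose(Sz @ ket_plus, 0.5 * ket_plus))    # True
print(np.allclose(Sz @ ket_minus, -0.5 * ket_minus)) # True

# Bras with the same label are the Hermitian conjugates: <+| = |+>^dagger.
bra_plus = ket_plus.conj().T
print((bra_plus @ ket_minus).item())  # eigenspinors are orthogonal
```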
Bra–ket notation was effectively established in 1939 by Paul Dirac and is thus also known as Dirac notation. (Still, bra–ket notation has a precursor in Hermann Grassmann's use of the notation [φ|ψ] for his inner products nearly 100 years earlier.)
Starting with finding the probability of getting one of the states will make finding the other trivial, as the sum of their probabilities would be 1.
Some confusion came because I never represented the states ##|\pm \textbf{z}\rangle## as a superposition of other states, but I guess you would...
##U_1 \otimes U_2 = (1 - i H_1 \, dt) \otimes (1 - i H_2 \, dt)##
We can write ##|\phi_i(t)\rangle = U_i(t)|\phi_i(0)\rangle## where i can be 1 or 2 depending on the subsystem. The ## U ##'s are unitary time evolution operators.
Writing as a tensor product we get
##|\phi_1 \phi_2\rangle = (1 - i H_1 \, dt) ...
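The first-order expansion in the excerpt can be checked numerically: the tensor product of the evolved states equals the product unitary applied to the product state, by the mixed-product property (A⊗B)(u⊗v) = (Au)⊗(Bv). A sketch with made-up 2×2 Hamiltonians (ħ = 1):

```python
import numpy as np

dt = 1e-4
# Hypothetical subsystem Hamiltonians (Pauli x and Pauli z, for illustration).
H1 = np.array([[0.0, 1.0], [1.0, 0.0]])
H2 = np.array([[1.0, 0.0], [0.0, -1.0]])
I = np.eye(2)

# First-order time evolution operators U_i ~= 1 - i H_i dt.
U1 = I - 1j * H1 * dt
U2 = I - 1j * H2 * dt

# Product state of the two subsystems.
phi1 = np.array([1.0, 0.0])
phi2 = np.array([0.0, 1.0])

# Evolving the joint state with U1 (x) U2 ...
lhs = np.kron(U1, U2) @ np.kron(phi1, phi2)
# ... equals the tensor product of the individually evolved states.
rhs = np.kron(U1 @ phi1, U2 @ phi2)
print(np.allclose(lhs, rhs))  # True
```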
Homework Statement
In the absence of degeneracy, prove that a sufficient condition for the equation below (1), where \left|a'\right> is an eigenket of A, et al., is (2) or (3).
Homework Equations
\sum_{b'} \left<c'|b'\right>\left<b'|a'\right>\left<a'|b'\right>\left<b'|c'\right> = \sum_{b',b''}...
Homework Statement
In Sakurai's Modern Physics, the author says, "... consider an outer product acting on a ket: (1.2.32). Because of the associative axiom, we can regard this equally well as (1.2.33), where \left<\alpha|\gamma\right> is just a number. Thus the outer product acting on a ket...
This might be trivial for some people but this has been bothering me lately.
If P is the momentum operator and p its eigenvalue, then the eigenfunction is u_p(x) = exp(ipx/ℏ), where ℏ is the reduced Planck constant.
While it can also be proved that...
If x,y,z are the position operators.
Is it true that:
<φ|x|φ> + <φ|y|φ> + <φ|z|φ> = <φ| x+y+z |φ> ?
So that if, for example, one wanted to compute <φ|r|φ> (where r = x+y+z), then they would just have to sum the parts.
I know that for scalars, a and b, we have the following...
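The linearity the post asks about does hold for any operators and any state, since the matrix product distributes over sums. A quick numerical check, with random Hermitian matrices standing in for the position operators (the matrices and state are placeholders, not a real position representation):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    """Random Hermitian matrix standing in for an observable."""
    m = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return m + m.conj().T

X, Y, Z = (random_hermitian(3) for _ in range(3))

# Arbitrary (unnormalized) state vector |phi>.
phi = rng.normal(size=3) + 1j * rng.normal(size=3)

def expect(op, state):
    """Expectation value <phi|op|phi>."""
    return state.conj() @ op @ state

# Summing the parts equals applying the summed operator.
lhs = expect(X, phi) + expect(Y, phi) + expect(Z, phi)
rhs = expect(X + Y + Z, phi)
print(np.isclose(lhs, rhs))  # True
```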
To me, braket notation just seems much easier and more intuitive than the approach from Griffiths. And yes, I learned QM through a text that used braket notation.
I'm a complete noob with Braket and I've only just started getting to grips with it.
For completeness' sake though (from the book I'm currently reading), I can't seem to find a definition for:
\langle J_z \rangle
Would this just be the "magnitude" of J_z?
Thanks
Ok, here is my question.
When you have < r | i >, this equals S_ri. So logically if that is the case, if you had S_ri S_aj this would equal < r | i >< a | j >, right?
If so, then what does < r | j >< a | j > equal? I'm working a problem where I am trying to get a final answer of < r | h | a...
How do you work out the commutator of two operators, A and B, which have been written in bra  ket notation?
|alpha> = |a>, |beta> = |b>
A = 2|a><a| + |a><b| + 3|b><a|
B = |a><a| + 3|a><b| + 5|b><a| - 2|b><b|
The answer is a 4x4 matrix according to my lecturer...
Any help much appreciated...
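One way to work such a problem: represent |a> and |b> as basis column vectors, assemble A and B from the outer products, and compute [A, B] = AB − BA by matrix multiplication. A sketch assuming |a>, |b> are an orthonormal basis of a two-dimensional space (the coefficients follow the post; the dimension of the final matrix depends on the space chosen):

```python
import numpy as np

# Assume |a>, |b> are orthonormal basis kets of a 2-dimensional space.
a = np.array([[1.0], [0.0]])
b = np.array([[0.0], [1.0]])

def outer(ket1, ket2):
    """Outer product |ket1><ket2|."""
    return ket1 @ ket2.conj().T

# Operators from the post: A = 2|a><a| + |a><b| + 3|b><a|
#                          B = |a><a| + 3|a><b| + 5|b><a| - 2|b><b|
A = 2 * outer(a, a) + outer(a, b) + 3 * outer(b, a)
B = outer(a, a) + 3 * outer(a, b) + 5 * outer(b, a) - 2 * outer(b, b)

# Commutator [A, B] = AB - BA.
commutator = A @ B - B @ A
print(commutator)
```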
the question:
Let {|u>, |v>} be a basis for a linear space, suppose that <u|v> = 0, then prove that:
A|v> = <A>I|v> + \delta A|u>
where A is a Hermitian operator, <A> = <v|A|v>, and \delta A = A - <A>I,
where I is the identity operator.
my attempt at solution:
basically, from the definitions i need...