A ket is a member of a complex
Hilbert space H, i.e. it's just a vector written in a funny way. A bra is a member of H*, the
dual space of H. H* is defined as the set of all bounded linear functionals on H, with addition of vectors and multiplication by a scalar defined in the obvious way:
(f+g)(x)=f(x)+g(x)
(af)(x)=a(f(x))
These definitions give H* the structure of a vector space.
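If it helps to see these operations concretely, here's a small sketch in Python (my own illustration, with functionals on C^2 written as plain callables; the names are mine, not from any standard library):

```python
import numpy as np

# Pointwise operations on functionals: (f+g)(x) = f(x) + g(x), (af)(x) = a f(x).
def add(f, g):
    return lambda x: f(x) + g(x)

def scale(a, f):
    return lambda x: a * f(x)

# Two linear functionals on C^2.
f = lambda x: x[0]          # picks out the first component
g = lambda x: 2 * x[1]      # twice the second component

x = np.array([1 + 1j, 3 - 2j])
h = add(f, scale(1j, g))    # the functional f + i*g
print(h(x))                 # (5+7j)
print(f(x) + 1j * g(x))     # (5+7j), the same number by definition
```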
A functional f:H\rightarrow\mathbb C is said to be bounded if there exists a real number M such that |fx|\leq M\|x\| for all x in H. Note that I'm using the notational convention that says that we write fx instead of f(x) when f is linear. It's pretty easy to show that a linear functional (any linear operator actually) is bounded if and only if it's continuous. So H* can be defined equivalently as the set of all continuous linear functionals on H.
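As a sanity check on the definition (again a finite-dimensional sketch of my own, not something from the general theory): on \mathbb C^n every linear functional has the form f(x)=\sum_i c_ix_i, and Cauchy-Schwarz gives the bound with M=\|c\|:

```python
import numpy as np

# On C^n, f(x) = sum_i c_i x_i is bounded with M = ||c|| (Cauchy-Schwarz).
# Spot-check |fx| <= M ||x|| on random vectors.
rng = np.random.default_rng(0)
n = 4
c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f = lambda x: np.dot(c, x)
M = np.linalg.norm(c)

for _ in range(1000):
    x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    assert abs(f(x)) <= M * np.linalg.norm(x) + 1e-12
print("|fx| <= M ||x|| held on every sample")
```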
Let's write the inner product of x and y as (x,y). The physicist's convention is to let the inner product be linear in the second variable and antilinear in the first. The Riesz representation theorem (which is easy to prove if you know the projection theorem already) says that for each f in H*, there's a unique x_0 in H such that f(x)=(x_0,x) for all x, and that this x_0 satisfies \|x_0\|=\|f\|. The norm on the right is the operator norm, defined by \|f\|=\sup_{\|x\|=1}|fx|. The map f\mapsto x_0 is a bijection from H* onto H, so there's exactly one bra for each ket, and vice versa. It's not a vector space isomorphism though, because it's antilinear rather than linear, as you can easily verify for yourself. (A function T:U\rightarrow V, where U and V are complex vector spaces, is said to be antilinear if T(ax+by)=a^*Tx+b^*Ty, for all vectors x,y and all complex numbers a,b.)
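In \mathbb C^n this is all very concrete. Here's a quick numerical illustration (my own; np.vdot conjugates its first argument, which matches the physicist's convention):

```python
import numpy as np

inner = lambda x, y: np.vdot(x, y)      # (x, y), antilinear in the first slot

rng = np.random.default_rng(1)
n = 3
x0 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f = lambda x: inner(x0, x)              # the functional represented by x0

# ||f|| = ||x0||: the sup of |fx| over unit vectors is attained at x0/||x0||.
print(abs(f(x0 / np.linalg.norm(x0))), np.linalg.norm(x0))   # equal

# Antilinearity of f -> x0: the functional a*f is represented by a^* x0.
a = 2 - 3j
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
print(np.allclose(a * f(x), inner(np.conj(a) * x0, x)))      # True
```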
We can use this antilinear bijection to define an inner product on H*. Let x' and y' be the bras that correspond to the kets x and y respectively (via the bijection mentioned above). We define (x',y')=(x,y). This definition gives H* the structure of a Hilbert space, and ensures that the antilinear bijection we defined preserves distances between points. The norm on H* defined by the inner product is consistent with the operator norm that we used before, because
\|x'\|^2_{vector}=(x',x')=(x,x)=\|x\|^2=\|x'\|^2_{operator}
where the norm on the left is the one defined by the inner product, and the one on the right is the operator norm. The last equality follows from the Riesz theorem, as mentioned above.
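A quick numerical check of this chain in the \mathbb C^n picture, where the bra x' is represented by the row vector x^\dagger (my illustration):

```python
import numpy as np

x = np.array([1 + 1j, 2 - 3j])
y = np.array([0 + 2j, 1 + 1j])
bra_x, bra_y = x.conj(), y.conj()       # representatives of the bras x', y'

# (x', y') is defined to be (x, y):
print(np.vdot(x, y))
# The bijection preserves distances: ||x' - y'|| = ||x - y||.
print(np.linalg.norm(bra_x - bra_y), np.linalg.norm(x - y))   # equal
```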
So far I've been writing the kets as x,y, etc. From now on I'll write them as |\alpha\rangle,\ |\beta\rangle, etc. The bra in H* that corresponds to the ket |\alpha\rangle (via the antilinear bijection mentioned above) is written as \langle\alpha|. Note that we have
(|\alpha\rangle,|\beta\rangle)=\langle\alpha|(|\beta\rangle)=\langle\alpha||\beta\rangle=\langle\alpha|\beta\rangle
The first equality is what we get from the Riesz theorem. The second is the notational convention for linear functions that I mentioned above. The third is another notational convention that I haven't explained yet. We just drop one of the vertical lines to make it look nicer.
Note that the right-hand side isn't the scalar product of \alpha and \beta (those symbols aren't even defined) or a "scalar product" of the bra \langle\alpha| with the ket |\beta\rangle (that concept hasn't been defined). It's the scalar product of the kets |\alpha\rangle and |\beta\rangle, or equivalently, the bra \langle\alpha| acting on the ket |\beta\rangle.
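In the column-vector picture of \mathbb C^n (my illustration, not part of the general theory above), a ket is a column, the corresponding bra is its conjugate transpose, and \langle\alpha|\beta\rangle is literally a matrix product:

```python
import numpy as np

alpha = np.array([[1 + 2j], [3 - 1j]])  # ket |alpha>, a 2x1 column
beta  = np.array([[0 + 1j], [2 + 0j]])  # ket |beta>

bra_alpha = alpha.conj().T              # bra <alpha|, a 1x2 row

print(bra_alpha @ beta)                 # <alpha|beta> as a 1x1 matrix
print(np.vdot(alpha, beta))             # the same number, (|alpha>, |beta>)
```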
Everything else is defined to make it look like we're just multiplying things together with an associative multiplication operation. For example, the expression |\alpha\rangle\langle\alpha| is defined as the operator that takes an arbitrary ket |\beta\rangle to the ket \langle\alpha|\beta\rangle|\alpha\rangle. This definition can be expressed as
(|\alpha\rangle\langle\alpha|)|\beta\rangle=|\alpha\rangle(\langle\alpha|\beta\rangle)
if we allow ourselves to write the scalars on the right. The convention is of course to allow that, so we would write both the left-hand side and the right-hand side of this equation as |\alpha\rangle\langle\alpha|\beta\rangle.
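Continuing the same finite-dimensional sketch, |\alpha\rangle\langle\alpha| is the matrix \alpha\alpha^\dagger, and the associativity that the notation suggests really holds:

```python
import numpy as np

alpha = np.array([[1 + 1j], [2 - 1j]])
beta  = np.array([[3 + 0j], [0 + 2j]])

op = alpha @ alpha.conj().T             # the operator |alpha><alpha|

lhs = op @ beta                         # (|alpha><alpha|) |beta>
rhs = (alpha.conj().T @ beta) * alpha   # |alpha> (<alpha|beta>)
print(np.allclose(lhs, rhs))            # True
```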
Here's an easy exercise: Define the expression \langle\alpha|A, where A is an operator, in a way that's consistent with what I just said.
Note that nothing I have said so far tells you how to make sense of expressions such as
\int da|a\rangle\langle a|=1
which includes "eigenvectors" of an operator that doesn't
have any eigenvectors. I still don't fully understand how to make sense of those myself, but I'm working on it. A full understanding includes knowledge about how to prove at least one of the relevant spectral theorems. This is the sort of stuff that you might see near the end of a 300-page book on functional analysis.
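For what it's worth, the discrete analogue of that relation, with a sum over an orthonormal basis instead of an integral, is completely unproblematic and easy to check numerically (my illustration; the continuous case is exactly what needs the spectral theorems):

```python
import numpy as np

# For an orthonormal basis {|e_i>} of C^n, sum_i |e_i><e_i| = 1.
rng = np.random.default_rng(2)
n = 4
# The columns of Q form a random orthonormal basis of C^n.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))

resolution = sum(np.outer(Q[:, i], Q[:, i].conj()) for i in range(n))
print(np.allclose(resolution, np.eye(n)))   # True
```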