Honestly I'm surprised that ket notation hasn't swept over all of linear algebra. It's a great tool for understanding.
For example, I always had trouble doing matrix multiplication by hand. "Is it row col or col row or... which dimensions have to match again?" But with kets that becomes downright trivial.
##|a\rangle \langle b|## is a transformation that sends ##|b\rangle## to ##|a\rangle## (it picks out the ##b## component of its input and routes it to ##a##), and we can break any matrix ##M## into a sum of these rank-one transformations: ##M = \sum_{i,j} M_{i,j} |i\rangle\langle j|##. Correspondingly, ##\langle a | b \rangle## is a comparison (dot product) between ##a## and ##b##, and the basis kets our breakdown uses are orthonormal: ##\langle a | a \rangle = 1## while ##\langle a | b \rangle = 0## for ##b \neq a##.
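As a quick sanity check, here's a minimal NumPy sketch of those two facts (the `ket`/`bra` helpers are my own hypothetical names, not anything standard): the outer products ##|i\rangle\langle j|## reassemble the matrix, and the basis kets are orthonormal.

```python
import numpy as np

n = 3

def ket(i):
    """|i>: the i-th standard basis vector, as an n x 1 column."""
    v = np.zeros((n, 1))
    v[i, 0] = 1.0
    return v

def bra(j):
    """<j|: the conjugate transpose of |j>, a 1 x n row."""
    return ket(j).conj().T

M = np.arange(float(n * n)).reshape(n, n)

# Orthonormality: <i|j> is 1 when i == j and 0 otherwise.
gram = np.array([[(bra(i) @ ket(j)).item() for j in range(n)] for i in range(n)])
assert np.array_equal(gram, np.eye(n))

# Rank-one breakdown: M = sum_{i,j} M[i,j] |i><j|.
rebuilt = sum(M[i, j] * (ket(i) @ bra(j)) for i in range(n) for j in range(n))
assert np.array_equal(rebuilt, M)
```

With those two facts in hand, matrix multiplication is just...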
##M \cdot N##
expand
##= \left(\sum_{i,j} M_{i,j} |i\rangle \langle j| \right) \left(\sum_{k,l} N_{k,l} |k\rangle \langle l | \right)##
combine
##= \sum_{i,j,k,l} M_{i,j} N_{k,l} |i\rangle \langle j|k\rangle \langle l |##
By orthonormality, the ##\langle j|k\rangle## factor is ##1## when ##j = k## and ##0## otherwise, so it discards all parts of the sum where ##j \neq k##:
##= \sum_{i,j,l} M_{i,j} N_{j,l} |i\rangle \langle l |##
group
##= \sum_{i,l} |i\rangle \langle l | \left( \sum_j M_{i,j} N_{j,l} \right)##
meaning
##(M \cdot N)_{i,l} = \sum_j M_{i,j} N_{j,l}##

The matrix multiplication definition falls right out of multiplying the ket breakdowns.
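The whole derivation can be checked numerically too. A sketch under the same assumptions as above (same hypothetical `ket`/`bra` helpers, repeated so the snippet runs on its own): brute-forcing the four-index sum ##\sum_{i,j,k,l} M_{i,j} N_{k,l} |i\rangle \langle j|k\rangle \langle l|## reproduces `M @ N`, because the scalar ##\langle j|k\rangle## zeroes out every term with ##j \neq k##.

```python
import numpy as np

n = 3

def ket(i):
    v = np.zeros((n, 1))
    v[i, 0] = 1.0
    return v

def bra(j):
    return ket(j).conj().T

rng = np.random.default_rng(0)
M = rng.integers(-5, 5, size=(n, n)).astype(float)
N = rng.integers(-5, 5, size=(n, n)).astype(float)

# sum_{i,j,k,l} M[i,j] N[k,l] |i><j|k><l|: the scalar <j|k> kills j != k.
product = sum(
    M[i, j] * N[k, l] * (bra(j) @ ket(k)).item() * (ket(i) @ bra(l))
    for i in range(n) for j in range(n)
    for k in range(n) for l in range(n)
)
assert np.allclose(product, M @ N)
```

It's ##O(n^4)## outer products, so purely illustrative, but the index bookkeeping matches the derivation line for line.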