# Help! We have forgotten how to write math stuff

WannabeNewton
Re Einstein index notation, I also like Penrose's generalization to "abstract index notation". It looks a lot like the usual Einstein notation, but its meaning generalizes to infinite-dimensional spaces.
Yeah Wald uses the notation throughout his text so I've grown rather fond of the abstract index notation whilst working through the text.

Recently, Ben Niehoff recommended to me a text on classical gauge fields (Rubakov), which I did manage to get my hands on, and in it Einstein notation is used in a way that makes my blood boil. In particular, the author writes, for example, $a_{i}b_{i}$ instead of $a_{i}b^{i}$ when implying summation. It is quite infuriating, lol.

"...the verdict of major mathematicians like J.Dieudonne is devastating [5]: “When one gets to the mathematical theories which are at the basis of quantum mechanics, one realizes that the attitude of certain physicists in the handling of these theories truly borders on the delirium. [...] One has to wonder what remains in the mind of a student who has absorbed this unbelievable accumulation
of nonsense, a real gibberish! It should be to believe that today’s physicists are only at
ease in the vagueness, the obscure and the contradictory."
After reading this I got the mental image that all physicists were high when doing their work xD.

I like Serena
Homework Helper
@_@ Have a heart ILS!!!!! geez. I tend to go for the latter tho.
Hey LiXe!

Actually, there is also a 3rd possibility.
If we follow the rules of algebra for function composition, $\sin^{-2}x$ should mean $\arcsin(\arcsin x)$.

Fredrik
Staff Emeritus
Gold Member
Recently, Ben Niehoff recommended to me a text on classical gauge fields (Rubakov), which I did manage to get my hands on, and in it Einstein notation is used in a way that makes my blood boil. In particular, the author writes, for example, $a_{i}b_{i}$ instead of $a_{i}b^{i}$ when implying summation. It is quite infuriating, lol.
I actually prefer the "everything downstairs" notation when we're just doing matrix multiplication (e.g. when we're dealing with Lorentz transformations in SR). For example, if A and B are square matrices, there's no reason to dislike the notation
$$\operatorname{Tr}(A^TB)=(A^TB)_{jj}=(A^T)_{ji}B_{ij}=A_{ij}B_{ij}.$$ What makes Rubakov's notation weird is that when he writes $F_{\mu\nu}F_{\mu\nu}$, he doesn't mean $\operatorname{Tr}(F^TF)$ (where F is the matrix with components $F_{\mu\nu}$), he means $\operatorname{Tr}(F^T\eta F)$. That's what's messed up, not that all the indices are downstairs. Edit: See my next post for a correction.
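The "everything downstairs" identity above is easy to check numerically; here is a minimal NumPy sketch (the matrix size and values are arbitrary):

```python
import numpy as np

# Check the identity Tr(A^T B) = sum_{i,j} A_ij B_ij for random square matrices.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = np.trace(A.T @ B)
rhs = np.sum(A * B)   # elementwise product, then sum over both indices
assert np.isclose(lhs, rhs)
```

The elementwise product `A * B` summed over both indices is exactly the repeated-index sum $A_{ij}B_{ij}$.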

WannabeNewton
I actually prefer the "everything downstairs" notation when we're just doing matrix multiplication (e.g. when we're dealing with Lorentz transformations in SR). For example, if A and B are square matrices, there's no reason to dislike the notation
$$\operatorname{Tr}(A^TB)=(A^TB)_{jj}=(A^T)_{ji}B_{ij}=A_{ij}B_{ij}.$$ What makes Rubakov's notation weird is that when he writes $F_{\mu\nu}F_{\mu\nu}$, he doesn't mean $\operatorname{Tr}(F^TF)$ (where F is the matrix with components $F_{\mu\nu}$), he means $\operatorname{Tr}(F^T\eta F)$. That's what's messed up, not that all the indices are downstairs.
I agree that for matrices it certainly is a perfectly fine way to write it, but I didn't realize that's what he meant by his notation. That's quite evil, haha.

Fredrik
Staff Emeritus
Gold Member
Looks like I was a bit careless. He says that $F_{\mu\nu}F_{\mu\nu}$ denotes what we'd normally write as $F_{\mu\nu}F^{\mu\nu}$. This is equal to $\operatorname{Tr}(F^T\eta^{-1}F\eta^{-1})$ if $F$ denotes the matrix with components $F_{\mu\nu}$.
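A quick numerical sanity check of this relation, assuming the signature $(-,+,+,+)$ (for which $\eta^{-1}=\eta$) and an arbitrary antisymmetric $F$:

```python
import numpy as np

# Verify F_{mu nu} F^{mu nu} = Tr(F^T eta^{-1} F eta^{-1})
# for a random antisymmetric F and the Minkowski metric eta = diag(-1, 1, 1, 1).
rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
F = M - M.T                          # antisymmetric, like a field-strength tensor
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eta_inv = np.linalg.inv(eta)         # equals eta itself for this metric

# Raise both indices: F^{mu nu} = eta^{mu a} eta^{nu b} F_{a b}
F_up = eta_inv @ F @ eta_inv.T
lhs = np.sum(F * F_up)               # F_{mu nu} F^{mu nu}
rhs = np.trace(F.T @ eta_inv @ F @ eta_inv)
assert np.isclose(lhs, rhs)
```

With a Euclidean metric ($\eta = I$) the right-hand side collapses to $\operatorname{Tr}(F^TF)$, which is why the all-downstairs notation is harmless there and misleading in the Lorentzian case.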

AlephZero
Homework Helper
I think we should go back to Laplace's notation for partial derivatives. Replace $\displaystyle \frac{\partial y}{\partial x}$ with $\displaystyle \left( \frac{dy}{dx} \right)$.

And higher derivatives are (obviously!) $\displaystyle \left(\frac {dyy}{dxx}\right)$, etc.

He also used $c$ for the base of natural logarithms instead of $e$. I think relativists would prefer that.

(Look at his "Celestial Mechanics" on the Internet Archive.)

I like Serena
Homework Helper
I think we should go back to Laplace's notation for partial derivatives. Replace $\displaystyle \frac{\partial y}{\partial x}$ with $\displaystyle \left( \frac{dy}{dx} \right)$.

And higher derivatives are (obviously!) $\displaystyle \left(\frac {dyy}{dxx}\right)$, etc.

He also used $c$ for the base of natural logarithms instead of $e$. I think relativists would prefer that.

(Look at his "Celestial Mechanics" on the Internet Archive.)
Be careful there, or we might get confused with the Legendre symbol.

strangerep
yenchin said:
"...the verdict of major mathematicians like J.Dieudonne is devastating [5]:
“When one gets to the mathematical theories which are at the basis of quantum mechanics, one realizes that the attitude of certain physicists in the handling of these theories truly borders on the delirium. [...] One has to wonder what remains in the mind of a student who has absorbed this unbelievable accumulationof nonsense, a real gibberish! It should be to believe that today’s physicists are only at ease in the vagueness, the obscure and the contradictory."
After reading this I got the mental image that all physicists were high when doing their work xD.
And yet it's the physicists who have discovered vastly more about the world than mathematicians. Indeed, it was an evolutionary advantage for human brains to develop an ability to deemphasize those details which are of less relevance for understanding and predicting real world behaviour.

IOW, physicists are not "high" when doing their work -- quite the opposite. A predator that is "high" when trying to understand and anticipate the movements of its prey is likely to starve...

strangerep said:
And yet it's the physicists who have discovered vastly more about the world than mathematicians.
Yes, and isn't it odd how carpenters are much more adept at building tables than your average florist?

Danger
Gold Member
Start by turfing anything that I can't express on a manual typewriter.

I think everybody has to admit that physicists are often very informal with math. But regardless of that, it is rather amazing that they still get correct results by applying math that isn't really rigorous. Fair enough, they also get contradictions. But I am still in awe of the fact that rather informal math actually works. For example, the Dirac delta function was clearly nonsense when physicists first used it, but they did get the right results. It's only later that mathematicians found out why.

I think that's the key point. Physicists do things that aren't always justified, but do get the right result. Mathematicians can use these things to develop new math (such as distributions). Without physicists, there would be far fewer advances in mathematics.

What I want to say is that physicists and mathematicians shouldn't be throwing mud at each other. In fact, we should benefit from each other and work together.

Fredrik
Staff Emeritus
Gold Member
Dirac notation is only useful if they also teach rigged Hilbert spaces. Without that, it's a pretty awful notation. When I read something in Dirac notation, I always get confused. If I then read the same thing in ordinary math notation, I understand it immediately.

Furthermore, I think that Dirac notation tends to obfuscate domain issues. So you're more prone to errors.
But sometimes it's really nice. Consider e.g. the proof that if $\rho$ is a projection operator for the 1-dimensional subspace spanned by a unit vector f (written as |f> when we use bra-ket notation), and A is self-adjoint, then $\operatorname{Tr}(\rho A)=\langle f,Af\rangle$.

"Ordinary math notation" (with the convention to have the inner product linear in the second variable):

\begin{align}
\operatorname{Tr}(\rho A) &=\sum_n\langle e_n,\rho A e_n\rangle =\sum_n\left\langle e_n,\langle f,Ae_n\rangle f\right\rangle =\sum_n\langle \langle f,Ae_n\rangle^* e_n,f\rangle =\sum_n\langle \langle Ae_n,f\rangle e_n,f\rangle\\
&=\sum_n\langle \langle e_n,Af\rangle e_n,f\rangle =\langle Af,f\rangle =\langle f,Af\rangle
\end{align}
Bra-ket notation:
\begin{align}
\operatorname{Tr}(\rho A) &=\sum_n\langle n|f\rangle\langle f|A|n\rangle =\sum_n\langle f|A|n\rangle\langle n|f\rangle=\langle f|A|f\rangle.
\end{align}
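The identity itself is also easy to verify numerically; a small NumPy sketch with a random unit vector $f$ and a random Hermitian $A$:

```python
import numpy as np

# Check Tr(rho A) = <f|A|f>, where rho = |f><f| projects onto a unit vector f
# and A is self-adjoint (Hermitian).
rng = np.random.default_rng(2)
f = rng.standard_normal(5) + 1j * rng.standard_normal(5)
f /= np.linalg.norm(f)                     # normalize to a unit vector
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = M + M.conj().T                         # Hermitian by construction

rho = np.outer(f, f.conj())                # |f><f|
lhs = np.trace(rho @ A)
rhs = f.conj() @ A @ f                     # <f|A|f>
assert np.isclose(lhs, rhs)
```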

strangerep
What I want to say is that physicists and mathematicians shouldn't be throwing mud at each other. In fact, we should benefit from each other and work together.
My feelings exactly.

"Ordinary math notation" (with the convention to have the inner product linear in the second variable):

\begin{align}
\operatorname{Tr}(\rho A) &=\sum_n\langle e_n,\rho A e_n\rangle =\sum_n\left\langle e_n,\langle f,Ae_n\rangle f\right\rangle =\sum_n\langle \langle f,Ae_n\rangle^* e_n,f\rangle =\sum_n\langle \langle Ae_n,f\rangle e_n,f\rangle\\
&=\sum_n\langle \langle e_n,Af\rangle e_n,f\rangle =\langle Af,f\rangle =\langle f,Af\rangle
\end{align}
Or: extend $f$ to an orthonormal basis $(e_n)_n$ with $e_1=f$. Then
$$\operatorname{Tr}(\rho A) = \sum_n \langle e_n,\rho A e_n\rangle = \sum_n \langle \rho e_n, A e_n\rangle = \langle f,Af\rangle$$
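The same trick can be mimicked numerically (real case for simplicity): build an orthonormal basis whose first vector is $f$, here via a QR factorization as one convenient construction, and watch all but one term of the trace sum vanish:

```python
import numpy as np

# Numerical version of the trick: extend a unit vector f to an orthonormal
# basis (e_n) with e_1 = f, then compute Tr(rho A) = sum_n <rho e_n, A e_n>.
rng = np.random.default_rng(3)
f = rng.standard_normal(5)
f /= np.linalg.norm(f)
M = rng.standard_normal((5, 5))
A = M + M.T                               # self-adjoint (real symmetric)
rho = np.outer(f, f)                      # projection onto span{f}

# QR on a matrix whose first column is f gives an orthonormal basis;
# flip the sign so the first basis vector is exactly f, not -f.
X = np.column_stack([f, rng.standard_normal((5, 4))])
Q, _ = np.linalg.qr(X)
if Q[:, 0] @ f < 0:
    Q = -Q
basis = Q.T                               # rows are e_1, ..., e_5, with e_1 = f

# Only the n = 1 term survives, since rho e_n = 0 for e_n orthogonal to f.
trace_sum = sum((rho @ e) @ (A @ e) for e in basis)
assert np.isclose(trace_sum, f @ A @ f)
```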

Fredrik
Staff Emeritus
Gold Member
That's a cool trick. I still think that bra-ket notation makes it easier to see some of these things quickly.

The only thing that needs to change is
$$\sin^2(x)$$

This needs to die in a fire
Or even worse... $$\sin^{-1}(x)$$.
The "logic" in going from one of these to the other.... like wow, man. Not going to cause any confusion there....