Help! We have forgotten how to write math stuff

  • #36
micromass said:
The point is that he wrote dx before the function instead of after it.

Ah, yes, that might be right. I thought he was simply integrating the whole thing over 1, and the (stuff) was just an afterthought, not the function. Kind of like "I hate this so-and-so 'stuff'" :tongue:
 
  • #37
The plus sign ("+") looks too much like the letter "t". So... I don't know... make it, like, a squiggle or something.
 
  • #38
Have you noticed that sometimes [itex]\cos^2(x)[/itex] is used for [itex]\cos(x)^2[/itex], while the first should mean cos(cos(x)) and the second cos(x)*cos(x)? I thought maybe it was because the first notation could save a pair of parentheses.
 
  • #39
jk22 said:
Have you noticed that sometimes cos^2(x) is used for cos(x)^2, while the first should mean cos(cos(x)) and the second cos(x)*cos(x)? I thought maybe it was because the first notation could save a pair of parentheses.

I still wonder what ##\sin^{-2}x## means. Is it ##(\arcsin x)^2##, or is it ##(\csc x)^2##?

I'd like to get rid of all the ambiguities in math notation.
So I also have problems with the fact

... that ##\text{sinc }x## can mean either ##\frac {\sin x} x## or ## \frac {\sin(\pi x)} {\pi x}## (see the quick check below).

... that Fourier Transforms are not properly standardized.

... that the meaning of ##\theta## and ##\phi## in spherical coordinates is not properly standardized.
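The sinc ambiguity even bites in code. NumPy, for instance, happens to pick the normalized convention, so a throwaway Python check makes the two definitions visibly disagree:

[code]
import numpy as np

x = 0.5
print(np.sinc(x))                       # normalized: sin(pi*x)/(pi*x), about 0.6366
print(np.sin(np.pi * x) / (np.pi * x))  # the same thing written out
print(np.sin(x) / x)                    # unnormalized convention, about 0.9589
[/code]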
 
  • #40
FlexGunship said:
The plus sign ("+") looks too much like the letter "t". So... I don't know... make it, like, a squiggle or something.

I think we're OK with "+", "-", "*", and "/"; let's not go overboard here. I don't want to start unlearning stuff I learned in preschool at my age! :frown:
 
  • #41
DiracPool said:
Ah, yes, that might be right. I thought he was simply integrating the whole thing over 1, and the (stuff) was just an afterthought, not the function. Kind of like "I hate this so-and-so 'stuff'" :tongue:

An integration is actually a summation.
Didn't we learn in pre-school that multiplication has a higher priority than summation?
I prefer not to unlearn that. :wink:
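For what it's worth, both orderings denote the same integral; the physics habit of putting the measure first is mostly a convenience for iterated integrals:
$$\int dx\, f(x) = \int f(x)\,dx, \qquad \int dx \int dy\, f(x,y) = \iint f(x,y)\,dy\,dx.$$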
 
  • #42
I hate the annoying anti-symmetrization brackets used for things like wedge products and exterior derivatives. Like honestly, who ever thought ##\nabla_{[e}\omega_{a_1...a_n]}## was better than ##d\omega##? Not to mention it is quite cumbersome during proofs.
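For anyone who hasn't seen it spelled out: in one common convention (e.g. Wald's, with ##\omega## an n-form and any torsion-free ##\nabla##), the brackets are just encoding
$$(d\omega)_{e a_1 \ldots a_n} = (n+1)\,\nabla_{[e}\omega_{a_1 \ldots a_n]},$$
which is exactly why ##d\omega## is the cleaner thing to write.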
 
  • #43
I like Serena said:
An integration is actually a summation.

Actually, it's the converse. A summation is integration wrt a special measure.
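Spelled out (for, say, nonnegative or absolutely summable ##a_n##): with ##\mu## the counting measure on ##\mathbb{N}##,
$$\sum_{n=1}^{\infty} a_n = \int_{\mathbb{N}} a \, d\mu.$$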
 
  • #44
I like Serena said:
I still wonder what ##\sin^{-2}x## means. Is it ##(\arcsin x)^2##, or is it ##(\csc x)^2##?

@_@ Have a heart ILS! geez. I tend to go for the latter tho.
 
  • #45
So Micro, what's the motivation behind your question?

Are you designing a better math?

Or writing a post-apocalyptic sci-fi novel?
 
  • #46
WannabeNewton said:
I don't know how mathematicians feel about Dirac notation, but Einstein notation doesn't seem to be too rare amongst mathematicians.
I know at least one (applied) mathematician who dislikes both, and prefers to write a Dirac bra-ket product as something like ##\phi^T \psi##. Personally, I find both notations to be useful in different situations. Being multilingual is usually beneficial.

Re Einstein index notation, I also like Penrose's generalization to "abstract index notation". It looks a lot like the usual Einstein notation, but its meaning generalizes to infinite-dimensional spaces.
 
  • #47
Office_Shredder said:
The only thing that needs to change is
[tex] \sin^2(x) [/tex]

This needs to die in a fire
I don't have a problem with this at all.

jk22 said:
Have you noticed that sometimes [itex]\cos^2(x)[/itex] is used for [itex]\cos(x)^2[/itex], while the first should mean cos(cos(x)) and the second cos(x)*cos(x)?
Why should the first one refer to a composition? If you're working with a set of functions on which both products and compositions are defined, what notation would you use for the product of f and g? Wouldn't you use ##fg##? In that case, the ##f^2## notation is very natural too.

This also explains ##\sin^2(x)##.
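Spelled out: with the pointwise product ##(fg)(x)=f(x)g(x)##,
$$f^2(x) = (f\cdot f)(x) = f(x)^2,$$
whereas reading the exponent as composition would give ##f^2(x)=(f\circ f)(x)=f(f(x))##. The usual ##\sin^2 x## convention simply takes the pointwise product as the default.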
 
  • #48
WannabeNewton said:
I don't know how mathematicians feel about Dirac notation, but Einstein notation doesn't seem to be too rare amongst mathematicians. Lee, for example, uses it in both his smooth and Riemannian manifolds texts.

I recommend reading the very nice article: "Mathematical Surprises and Dirac's Formalism in Quantum Mechanics", F. Gieres, Rep. Prog. Phys. 63 (2000) 1893, arXiv:quant-ph/9907069v2.

"We discuss the problems and shortcomings of this formalism as well as those of the bra and ket notation introduced by Dirac in this context. In conclusion, we indicate how all of these problems can be solved or at least avoided."

"...the verdict of major mathematicians like J.Dieudonne is devastating [5]: “When one gets to the mathematical theories which are at the basis of quantum mechanics, one realizes that the attitude of certain physicists in the handling of these theories truly borders on the delirium. [...] One has to wonder what remains in the mind of a student who has absorbed this unbelievable accumulation
of nonsense, a real gibberish! It should be to believe that today’s physicists are only at
ease in the vagueness, the obscure and the contradictory."
 
  • #49
Nothing in mathematics annoys me as much as the physicist's description of a tensor as "something that transforms as (blah-blah-blah)". I don't think I even want to talk about it. I get angry just thinking about it. So I'll just mention some mildly irritating things.

I don't like it when people write something like f(x) and refer to it as a "function". It's not. f is the function. f(x) is an element of its codomain. So f(x) is usually a number.

"Find the derivative of x sin ax." The correct answer is: "The derivative of a number is not defined, you idiot". But I doubt that you will get the maximum number of points if you write this on an exam.

I also don't like it when people write ##\frac{d}{dx}f## or ##\frac{d}{dt}f## for the derivative of a function f. Either use a notation like Df, which doesn't have an irrelevant variable symbol in the part that tells you to take the derivative of something, or write ##\frac{d}{dx}f(x)##.

The latter notation is perfectly fine, because
$$\frac{d}{dx}(x\sin ax)$$ is read as "the value at x of the derivative of the function ##t\mapsto t\sin at##". Of course when I explain that to someone, they always ask what t is. **Facepalm**
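For example, the answer the exam question is really after can be stated unambiguously as
$$\frac{d}{dx}\bigl(x\sin ax\bigr) = \sin ax + ax\cos ax, \qquad\text{i.e.}\qquad D(t\mapsto t\sin at)(x) = \sin ax + ax\cos ax.$$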

I also don't like that people don't use the simple notation ##(AB)_{ij}=\sum_{k=1}^n A_{ik}B_{kj}## in the definition of matrix multiplication. Most physics students who see this don't even recognize this as the definition.
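In code, the definition is literally just that sum inside two loops. A throwaway Python/NumPy sketch (the helper name is mine), just to make the index pattern explicit:

[code]
import numpy as np

def matmul(A, B):
    """(AB)_ij = sum_k A_ik * B_kj, written out index by index."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must match"
    C = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(m))
    return C

A = np.random.rand(3, 4)
B = np.random.rand(4, 2)
print(np.allclose(matmul(A, B), A @ B))  # True
[/code]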

I don't like the term "functions of many variables". It makes sense when we talk informally about how an equation like ##x+y+z=0## makes z "a function of x and y", but it's inappropriate when we use the actual definition of "function". A function ##f:\mathbb R^3\to\mathbb R## isn't a function "of many variables". It takes one element of the domain as input, and that element can be represented by one variable.
 
  • #50
yenchin said:
When one gets to the mathematical theories which are at the basis of quantum mechanics, one realizes that the attitude of certain physicists in the handling of these theories truly borders on the delirium. [...] One has to wonder what remains in the mind of a student who has absorbed this unbelievable accumulation of nonsense, a real gibberish!
I love that quote. :smile:
 
  • #51
strangerep said:
Re Einstein index notation, I also like Penrose's generalization to "abstract index notation". It looks a lot like the usual Einstein notation, but its meaning generalizes to infinite-dimensional spaces.
Yeah, Wald uses the notation throughout his text, so I've grown rather fond of abstract index notation whilst working through it.

Recently, Ben Niehoff recommended to me a text on classical gauge fields (Rubakov), which I did manage to get my hands on, and in it Einstein notation is used in a way that makes my blood boil o:) In particular, the author writes, for example, ##a_{i}b_{i}## instead of ##a_{i}b^{i}## when implying summation. It is quite infuriating lol.

yenchin said:
"...the verdict of major mathematicians like J.Dieudonne is devastating [5]: “When one gets to the mathematical theories which are at the basis of quantum mechanics, one realizes that the attitude of certain physicists in the handling of these theories truly borders on the delirium. [...] One has to wonder what remains in the mind of a student who has absorbed this unbelievable accumulation
of nonsense, a real gibberish! It should be to believe that today’s physicists are only at
ease in the vagueness, the obscure and the contradictory."
After reading this I got the mental image that all physicists were high when doing their work xD.
 
  • #52
HeLiXe said:
@_@ Have a heart ILS! geez. I tend to go for the latter tho.

Hey LiXe! :wink:

Actually, there is also a 3rd possibility.
If we follow the rules of algebra with function composition, ##\sin^{-2}x## should be ##\arcsin(\arcsin x)##. o_O
 
  • #53
WannabeNewton said:
Recently, Ben Niehoff recommended to me a text on classical gauge fields (Rubakov), which I did manage to get my hands on, and in it Einstein notation is used in a way that makes my blood boil o:) In particular, the author writes, for example, ##a_{i}b_{i}## instead of ##a_{i}b^{i}## when implying summation. It is quite infuriating lol.
I actually prefer the "everything downstairs" notation when we're just doing matrix multiplication (e.g. when we're dealing with Lorentz transformations in SR). For example, if A and B are square matrices, there's no reason to dislike the notation
$$\operatorname{Tr}(A^TB)=(A^TB)_{jj}=(A^T)_{ji}B_{ij}=A_{ij}B_{ij}.$$ What makes Rubakov's notation weird is that when he writes ##F_{\mu\nu}F_{\mu\nu}##, he doesn't mean ##\operatorname{Tr}(F^TF)## (where F is the matrix with components ##F_{\mu\nu}##), he means ##\operatorname{Tr}(F^T\eta F)##. That's what's messed up, not that all the indices are downstairs. Edit: See my next post for a correction.
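Incidentally, the first identity is easy to sanity-check numerically (throwaway NumPy):

[code]
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Tr(A^T B) = sum_ij A_ij B_ij
print(np.isclose(np.trace(A.T @ B), np.sum(A * B)))  # True
[/code]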
 
  • #54
Fredrik said:
I actually prefer the "everything downstairs" notation when we're just doing matrix multiplication (e.g. when we're dealing with Lorentz transformations in SR). For example, if A and B are square matrices, there's no reason to dislike the notation
$$\operatorname{Tr}(A^TB)=(A^TB)_{jj}=(A^T)_{ji}B_{ij}=A_{ij}B_{ij}.$$ What makes Rubakov's notation weird is that when he writes ##F_{\mu\nu}F_{\mu\nu}##, he doesn't mean ##\operatorname{Tr}(F^TF)## (where F is the matrix with components ##F_{\mu\nu}##), he means ##\operatorname{Tr}(F^T\eta F)##. That's what's messed up, not that all the indices are downstairs.
I agree that for matrices it certainly is a perfectly fine way to write it, but I didn't realize that's what he meant by his notation. That's quite evil haha
 
  • #55
Looks like I was a bit careless. He says that ##F_{\mu\nu}F_{\mu\nu}## denotes what we'd normally write as ##F_{\mu\nu}F^{\mu\nu}##. This is equal to ##\operatorname{Tr}(F^T\eta^{-1}F\eta^{-1})## if ##F## denotes the matrix with components ##F_{\mu\nu}##.
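Writing the contraction out with everything treated as a matrix entry (so ##(\eta^{-1})_{\mu\alpha}## stands for ##\eta^{\mu\alpha}##):
$$F_{\mu\nu}F^{\mu\nu} = F_{\mu\nu}\,\eta^{\mu\alpha}\eta^{\nu\beta}F_{\alpha\beta} = (F^T)_{\nu\mu}(\eta^{-1})_{\mu\alpha}F_{\alpha\beta}(\eta^{-1})_{\beta\nu} = \operatorname{Tr}(F^T\eta^{-1}F\eta^{-1}).$$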
 
  • #56
I think we should go back to Laplace's notation for partial derivatives. Replace ##\displaystyle \frac{\partial y}{\partial x}## with ##\displaystyle \left( \frac{dy}{dx} \right)##.

And higher derivatives are (obviously!) ##\displaystyle \left(\frac {dyy}{dxz}\right)##, etc.

He also used ##c## for the base of natural logarithms instead of ##e##. I think relativists would prefer that :devil:

(Look at his "Celestial Mechanics" on the Internet Archive)
 
  • #57
AlephZero said:
I think we should go back to Laplace's notation for partial derivatives. Replace ##\displaystyle \frac{\partial y}{\partial x}## with ##\displaystyle \left( \frac{dy}{dx} \right)##.

And higher derivatives are (obviously!) ##\displaystyle \left(\frac {dyy}{dxz}\right)##, etc.

He also used ##c## for the base of natural logarithms instead of ##e##. I think relativists would prefer that :devil:

(Look at his "Celestial Mechanics" on the Internet Archive)

Be careful there, or we might get confused with the Legendre symbol.
 
  • #59
WannabeNewton said:
yenchin said:
"...the verdict of major mathematicians like J.Dieudonne is devastating [5]:
“When one gets to the mathematical theories which are at the basis of quantum mechanics, one realizes that the attitude of certain physicists in the handling of these theories truly borders on the delirium. [...] One has to wonder what remains in the mind of a student who has absorbed this unbelievable accumulationof nonsense, a real gibberish! It should be to believe that today’s physicists are only at ease in the vagueness, the obscure and the contradictory."
After reading this I got the mental image that all physicists were high when doing their work xD.
And yet it's the physicists who have discovered vastly more about the world than mathematicians. Indeed, it was an evolutionary advantage for human brains to develop an ability to deemphasize those details which are of less relevance for understanding and predicting real world behaviour.

IOW, physicists are not "high" when doing their work -- quite the opposite. A predator that is "high" when trying to understand and anticipate the movements of its prey is likely to starve...
 
  • #60
strangerep said:
And yet it's the physicists who have discovered vastly more about the world than mathematicians.

Yes, and isn't it odd how carpenters are much more adept at building tables than your average florist? :rolleyes:
 
  • #61
Start by turfing anything that I can't express on a manual typewriter.
 
  • #62
I think everybody has to admit that physicists are often very informal with math. But regardless of that, it is rather amazing that they still get correct results by applying math that isn't really rigorous. Fair enough, they also get contradictions. But I am still in awe of the fact that rather informal math actually works. For example, the Dirac delta function was clearly nonsense when they first used it, but they did get the right results. It was only later that mathematicians found out why.

I think that's the key point. Physicists do things that aren't always justified, but do get the right result. Mathematicians can use these things to develop new math (such as distributions). Without physicists, there would be far fewer advances in mathematics.

What I want to say is that physicists and mathematicians shouldn't be throwing mud at each other. In fact, we should benefit from each other and work together.
 
  • #63
micromass said:
Dirac notation is only useful if they also teach rigged Hilbert spaces. Without that, it's a pretty awful notation. When I read something in Dirac notation, I always get confused. If I then read the same thing in ordinary math notation, I understand it immediately.

Furthermore, I think that Dirac notation tends to obfuscate domain issues. So you're more prone to errors.
But sometimes it's really nice. Consider e.g. the proof that if ##\rho## is a projection operator for the 1-dimensional subspace spanned by a unit vector f (written as ##|f\rangle## when we use bra-ket notation), and A is self-adjoint, then ##\operatorname{Tr}(\rho A)=\langle f,Af\rangle##.

"Ordinary math notation" (with the convention to have the inner product linear in the second variable):

\begin{align}
\operatorname{Tr}(\rho A) &=\sum_n\langle e_n,\rho A e_n\rangle =\sum_n\left\langle e_n,\langle f,Ae_n\rangle f\right\rangle =\sum_n\langle \langle f,Ae_n\rangle^* e_n,f\rangle =\sum_n\langle \langle Ae_n,f\rangle e_n,f\rangle\\
&=\sum_n\langle \langle e_n,Af\rangle e_n,f\rangle =\langle Af,f\rangle =\langle f,Af\rangle
\end{align}
Bra-ket notation:
\begin{align}
\operatorname{Tr}(\rho A) &=\sum_n\langle n|f\rangle\langle f|A|n\rangle =\sum_n\langle f|A|n\rangle\langle n|f\rangle=\langle f|A|f\rangle.
\end{align}
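And if anyone wants to see it with actual numbers, here's a throwaway NumPy check (random unit vector f, random self-adjoint A):

[code]
import numpy as np

rng = np.random.default_rng(1)

# random unit vector f and random self-adjoint A
f = rng.standard_normal(5) + 1j * rng.standard_normal(5)
f /= np.linalg.norm(f)
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = (M + M.conj().T) / 2

rho = np.outer(f, f.conj())   # the projection |f><f|
print(np.isclose(np.trace(rho @ A), f.conj() @ A @ f))  # True
[/code]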
 
  • #64
micromass said:
What I want to say is that physicists and mathematicians shouldn't be throwing mud at each other. In fact, we should benefit from each other and work together.
My feelings exactly.
 
  • #65
Fredrik said:
"Ordinary math notation" (with the convention to have the inner product linear in the second variable):

\begin{align}
\operatorname{Tr}(\rho A) &=\sum_n\langle e_n,\rho A e_n\rangle =\sum_n\left\langle e_n,\langle f,Ae_n\rangle f\right\rangle =\sum_n\langle \langle f,Ae_n\rangle^* e_n,f\rangle =\sum_n\langle \langle Ae_n,f\rangle e_n,f\rangle\\
&=\sum_n\langle \langle e_n,Af\rangle e_n,f\rangle =\langle Af,f\rangle =\langle f,Af\rangle
\end{align}

Or: extend ##f## to an orthonormal basis ##(e_n)_n##, so ##e_1=f##. Then
[tex]Tr(\rho A) = \sum_n \langle e_n,\rho A e_n\rangle = \sum_n \langle \rho e_n, A e_n\rangle = \langle f,Af\rangle,[/tex]
using that ##\rho## is self-adjoint, ##\rho e_1 = f##, and ##\rho e_n = 0## for ##n>1##.
 
  • #66
That's a cool trick. I still think that bra-ket notation makes it easier to see some of these things quickly.
 
  • #67
Office_Shredder said:
The only thing that needs to change is
[tex] \sin^2(x) [/tex]

This needs to die in a fire

Or even worse... [tex] \sin^{-1}(x) [/tex].
The "logic" in going from one of these to the other... like wow, man. Not going to cause any confusion there...
 
