# A Atiyah's arithmetic physics

1. Sep 24, 2018

### mitchell porter

Sir Michael Atiyah just gave a livestreamed talk claiming to prove the Riemann hypothesis. But it turns out that this is part of a larger research program in which he also claims to have an a priori calculation of the fine-structure constant and possibly other physical constants.

Atiyah is 89. He's still enormously knowledgeable, but various mathematicians are saying that in recent years he has published a number of incorrect mathematical claims, and his faculties are therefore sadly in decline, at least relative to a point in his career where he was making genuine discoveries. Presumably some expert will eventually undertake the melancholy duty of summarizing what Atiyah has been saying mathematically and what's wrong about it. (PF's "General Math" forum already has a thread on today's claimed proof.)

But I thought I would start a thread that is specifically on the physical content of Atiyah's current ideas. One reason is that in the past few years he has coauthored a number of papers with alleged physical content, and while they were clearly speculative, I had not until now imagined that they might contain significant errors, and indeed they may not.

For example, with Manton he wrote a paper in 2016, "Complex Geometry of Nuclei and Atoms", proposing "a new geometrical model of matter, in which neutral atoms are modelled by compact, complex algebraic surfaces". Now, over twenty years ago Atiyah and Manton came up with an instantonic realization of the skyrmion - an old solitonic model of the nucleon - which was subsequently rediscovered in string theory, as part of the Sakai-Sugimoto model of holographic QCD. So one could reasonably wonder whether Atiyah and Manton had after all done it again, and found elegant algebraic-geometric representations of nuclei.

I don't know yet how this thread will work out. It may be difficult to segregate Atiyah's mathematics from his physics. Nonetheless, he has given a name to his physical paradigm - "arithmetic physics" - and I suppose that is what we should try to understand here. The notion is not unique to him. In the higher reaches of mathematics, there is already a refinement of algebraic geometry called arithmetic geometry, and presumably arithmetic physics is an application of arithmetic geometry to physics - it should be that simple.

2. Sep 24, 2018

### Staff: Mentor

3. Sep 24, 2018

### mitchell porter

Thanks for that remark.

Just to further describe the immediate situation, there are two unpublished papers by Atiyah that are now circulating. One contains the "proof" of the Riemann hypothesis, the other contains a "calculation" of the fine-structure constant. The fine-structure constant is said to be a renormalized value of π.

It is a commonplace of quantum field theory that various quantities engage in "renormalization group running". For example, a coupling constant will have a specific value at a certain energy scale, but will have other effective values at other energy scales, owing to quantum corrections. It would seem that Atiyah thinks that the fine-structure constant will be exactly equal to π (or possibly 1/π) at some energy scale, and then it runs to the observed ~1/137 at low energies. Incidentally, I believe there are examples of quantum field theories where a coupling of π naturally appears.
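As a concrete illustration of such running, here is a minimal sketch, assuming the standard one-loop vacuum-polarization formula for QED and keeping only the electron loop (the full Standard Model answer at the Z mass, about 1/128, needs all the charged fermions):

```python
import math

# One-loop QED running of the fine-structure constant, electron loop only.
# A sketch, not a full Standard Model calculation.
ALPHA_0 = 1 / 137.035999  # measured low-energy value
M_E = 0.000511            # electron mass in GeV

def alpha(q_gev):
    """Effective coupling at momentum scale q (GeV), electron loop only."""
    log = math.log(q_gev ** 2 / M_E ** 2)
    return ALPHA_0 / (1 - (ALPHA_0 / (3 * math.pi)) * log)

# At the Z mass the electron loop alone already shifts 1/alpha noticeably:
print(1 / alpha(91.19))  # ~134.5, versus ~128 once all charged fermions are included
```

So even in orthodox QFT, "the" fine-structure constant is scale-dependent, which is what makes a claim of a single exact value delicate.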

Atiyah has a few equations that supposedly describe this renormalization, but people either don't understand them, or say that they lead to a numerically wrong result. (His "proof of the Riemann hypothesis", incidentally, employs a function that appears in these equations.) I will also remark that ultimately, an algebraic formula for a physical constant needs to be part of a physical theory, or else it is just what is called "numerology". For example, the way that coupling constants run in the standard model, can be deduced from the standard model's equations of motion.

Meanwhile, what is "arithmetic physics" about? Atiyah says this was the title of his speech at last month's International Congress of Mathematicians in Rio de Janeiro, where this year's Fields Medals were awarded, but I have yet to find a video or transcript of the speech. In his "fine-structure constant paper", he also refers to "Manin's vision about a classical bridge between arithmetic and physics". That's the Russian mathematician Yuri Manin. In 1985, Atiyah wrote a "Commentary on the article of Manin", in which he proposes that the Langlands program might supply the math for a quantum version of Manin's arithmetic physics.

I can't find a clear characterization of what the philosophy of arithmetic physics is, so all I can say for now is still just this - that it would be physics which employs the "arithmetic" branches of contemporary mathematics, such as arithmetic geometry and arithmetic topology. To me that sounds like string theory, especially "p-adic string theory", the p-adics being a generalized notion of number which is used a lot in number theory... I am inclined to think that Atiyah may be wrong in detail but right in spirit - that his specific formula for the fine-structure constant is wrong, but that eventually the physics of the real world will be described by this kind of mathematics. However, we shouldn't really believe this until we have calculations that do work, and which are part of a genuine theoretical framework. Until then it's just another idea.

4. Sep 24, 2018

### Auto-Didact

I'm going through his paper "The Fine Structure Constant".

First a more general remark: to modern eyes, the paper definitely reads more like a popular science book than the physics or mathematics paper it claims to be, which makes it difficult to read. It should be noted that many old papers, prior to roughly 1900, use an informal format rather than the modern one we have grown up with and grown accustomed to, and many crackpot papers share that format as well. What is important to keep in mind is that this difference in writing style alone doesn't invalidate any of his claims, if they are indeed valid. I digress.

Now I haven't finished the paper yet, but the gist of the philosophy of Atiyah's arithmetic physics seems to be that renormalisation is not merely a mathematical technique for removing infinities from calculations, but an actual physical process occurring in a hidden conformal part of reality, with physical quantities such as the mass and charge of particles literally being numbers, geometrically picturable as points inside the critical strip of the Riemann zeta function in the complex plane.

Either that or I'm brain-dead without sleep and need more coffee. I'm gonna read on.

5. Sep 24, 2018

### Staff: Mentor

I think the cut is later. I have books from the '70s (Bartel van der Waerden, orig. 1930/31; Lothar Collatz, 1949; Alexander Kurosch, 1970) which are all "old-fashioned".

Pre-Bourbaki and Post-Bourbaki

Last edited: Sep 24, 2018
6. Sep 24, 2018

### Auto-Didact

Definitely, I just picked 1900 out of laziness. It of course goes without saying that Atiyah was educated, and worked for a long time, in that era as well; he certainly wouldn't be the first mathematician to reject the modern formal ways, seeing as Benoit Mandelbrot went so far as to leave mathematical academia and continue doing pure mathematics as an outlaw, by working in physics and many, many other branches of science.
Latin and history in one day, who said scientists aren't cultured?

Last edited: Sep 24, 2018
7. Sep 24, 2018

### Auto-Didact

Atiyah's description of the double limit process in section 8.8 sounds awfully similar to an approximation technique from nonlinear dynamical systems theory called the method of multiple time scales. For those unfamiliar with this technique, it is a way of approximating an exact solution that remains valid where regular perturbation theory breaks down (when secular terms appear), and so can serve as a full replacement for it. Anyone care to compare?
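For those unfamiliar, a minimal sketch of the standard textbook example, the weakly damped oscillator $x'' + \epsilon x' + x = 0$: naive perturbation theory produces a secular term growing linearly in $t$, while the leading-order multiple-scales result captures the decaying envelope uniformly (this is a generic illustration of the technique, not anything from Atiyah's paper):

```python
import math

# Method of multiple time scales vs naive perturbation theory for the
# weakly damped oscillator x'' + eps*x' + x = 0, x(0)=1, x'(0)=0.
# Naive perturbation:     x ~ (1 - eps*t/2) * cos(t)    (secular growth in t)
# Multiple scales, O(1):  x ~ exp(-eps*t/2) * cos(t)    (uniformly valid)
eps = 0.1
gamma, omega = eps / 2, math.sqrt(1 - (eps / 2) ** 2)

def exact(t):
    # closed-form solution of the damped oscillator with these initial conditions
    return math.exp(-gamma * t) * (math.cos(omega * t) + (gamma / omega) * math.sin(omega * t))

def naive(t):
    return (1 - eps * t / 2) * math.cos(t)

def multiscale(t):
    return math.exp(-eps * t / 2) * math.cos(t)

t = 40.0
print(abs(naive(t) - exact(t)), abs(multiscale(t) - exact(t)))
# at large t the naive error is O(1) while the multiple-scales error stays small
```

The point is that the multiple-scales ansatz resums the secular terms into an envelope, which is loosely what renormalization-group methods also do.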

The coffee machine is broken by the way, I'm about to go find a bed and collapse upon it.

8. Sep 24, 2018

### mitchell porter

Here's what I understand about Atiyah's calculation so far. We are aiming for a number that is approximately 1/137. We focus on the fact that

137 = 1 + 8 + 128 = 2^0 + 2^3 + 2^7

The exponents 0, 3, 7 are three of the four numbers 0, 1, 3, 7, each one less than a dimension (1, 2, 4, 8) of the division algebras R, C, H, O.
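The decomposition is trivial to check; a sketch with the division-algebra dimensions written out (note that $2^1$, the term coming from C, is the absentee):

```python
# Sanity check of 137 = 2^0 + 2^3 + 2^7, with exponents one less than the
# dimensions 1, 4, 8 of R, H, O.  The term 2^1, from C's dimension 2,
# is conspicuously missing from the sum.
dims = {"R": 1, "C": 2, "H": 4, "O": 8}
terms = {name: 2 ** (d - 1) for name, d in dims.items()}
print(terms)  # {'R': 1, 'C': 2, 'H': 8, 'O': 128}
assert terms["R"] + terms["H"] + terms["O"] == 137
```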

In the subject of algebraic geometry, where Atiyah made his mark, there is a phenomenon called Bott periodicity. Certain properties of higher-dimensional objects recapitulate those of lower dimensions. In particular, there is a form of Bott periodicity in which the properties recur (as you increase the dimension) with a cycle of length 8, and in a way suggestive of the division algebras.

Atiyah talks about series that converge on π. He mentions Archimedes, whose method of approximating π was to consider regular polygons inscribed in a circle. The more sides the polygon has, the more closely it approximates the circumference. Then Euler put a new twist on this by interpreting the circle in question as the unit circle in the complex plane.
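Archimedes' doubling scheme can be sketched in a few lines; this uses the algebraically equivalent but numerically stable form of the side-length recurrence:

```python
import math

# Archimedes' polygon-doubling approximation of pi.  Start from a regular
# hexagon inscribed in the unit circle (side length 1), repeatedly double
# the number of sides using the stable recurrence
#   s_{2n} = s_n / sqrt(2 + sqrt(4 - s_n^2)),
# and take half the perimeter as the estimate of pi.
def archimedes_pi(doublings=25):
    n, s = 6, 1.0
    for _ in range(doublings):
        s = s / math.sqrt(2 + math.sqrt(4 - s * s))
        n *= 2
    return n * s / 2

print(archimedes_pi())  # 3.14159265...
```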

In any case, mathematics contains series which converge on π. Atiyah introduces a new constant, ж, which is his renormalized "π". This also has a series expansion, and Atiyah says it will converge on 137.0136..., the reciprocal of the fine-structure constant.

But what series? And where does it come from? That part is almost completely opaque to me, so far. However... One manifestation of Bott periodicity is in the immersion of n-spheres in certain infinite-dimensional spaces. Homotopy theory gives an algebraic characterization of the topologically distinct ways in which the n-spheres can be embedded in the space.

I think that Atiyah's sum involves something like: adding numerical characters associated with the n-sphere homotopy groups for a particular infinite-dimensional space. The 137 in 137.0136... arises as above, in the first iteration of Bott periodicity, and then the fractional part is going to come somehow as a correction, arising from the subsequent iterations (i.e. the contributions associated with n-spheres for n>7).

This series for ж is somehow analogous to the Archimedes-Euler series for π. And the "space" with which it is associated, is the type II1 hyperfinite factor in the von Neumann algebra of observables for a quantum field theory. So Atiyah is proposing that (one over) the fine-structure constant is actually a new mathematical constant, analogous to π, and universally present in QFT.

That's what I have so far.

Now let me give a few reasons why all this is very problematic. First of all, the analogy between π and ж appears to be nothing like the relationship between a bare constant and its renormalized value, in physics. Second, why does 2^1 not appear in the sum producing the integer part of 137.0136...? Third, we need a clearer explanation of ж's alleged role in the theory of algebras of observables, and then why or how it is also a special value of the electromagnetic coupling constant.

I see no reason to think that this is going to work out. In the world of physics numerology, sometimes people just propose a formula and leave it at that, or they will try to explain the formula in a contrived way that doesn't really make sense. I have to say, this looks like the latter case - when done by a Fields Medalist. The appeal to Bott periodicity is ingenious, even elegant, but it still looks doomed.

Let me also say something about how orthodox physics accounts for the value of the electromagnetic coupling constant. Well, at the level of experimentally validated theories, it simply doesn't. But in the grand unified theories, all the gauge couplings descend from a single gauge coupling at high energies. And then we want something like string theory to provide a geometric explanation for the high-energy values. Numerically, the high-energy value of the unified gauge coupling has often been treated as about 1/24 or 1/25; and there is precisely one string theory paper known to me, which then gives a mechanism for how the unified coupling could take such a value. That paper has by no means swept the world, and furthermore it's part of that orthodoxy of grand unification and supersymmetry, which in all its forms is challenged by the experimental absence of weak-scale superpartners and proton decay. But I mention it to exhibit what a functioning derivation of the fine-structure constant could look like.

Unfortunately, Atiyah's conception does not even seem to fit into the normal notion of a running constant. His "renormalization" is something else entirely.

9. Sep 25, 2018

### Demystifier

Of course, you meant algebraic topology.

10. Sep 25, 2018

### mitchell porter

I found a video of Atiyah's lecture in Rio (delivered last month) - and it provides important extra context, while raising further questions. The Rosetta Stone of the lecture is a "Table of Symbols for Abel Lecture" which first appears at 16 seconds, and then intermittently later on.

He posits some connection between Type I, II, and III factors of von Neumann algebras, and real/complex numbers (which are both associated with the Type I factor), quaternions (associated with Type II), and octonions (associated with Type III). He also says that Euler's equation $e^{2\pi i} = 1$ has analogues for quaternions and octonions, in which π is replaced by new constants (and, presumably, imaginary i is replaced by a quaternionic or octonionic quantity). The quaternionic Euler equation is $e^{2жw} = 1$, and the new quantity ж, as already mentioned, is supposed to be 1/α.
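The complex (level I) case of Euler's equation is easy to verify numerically; the quaternionic analogue would need a quaternion unit w and Atiyah's constant ж, neither of which is standard, so only the complex case is sketched here:

```python
import cmath

# Numerical check of Euler's identity e^{2*pi*i} = 1 in the complex plane.
# The claimed quaternionic/octonionic analogues involve Atiyah's
# non-standard constant and are not checkable here.
value = cmath.exp(2j * cmath.pi)
print(value)  # (1+0j) up to floating-point error
assert abs(value - 1) < 1e-12
```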

The Euler-Mascheroni constant γ is another "Type I" mathematical constant which has analogues at the higher levels; but e is always just e. We are also told that c (speed of light), h (Planck's constant), q(e) (charge of the electron), and G' (dimensionless gravitational constant) are level-I physical quantities, whereas "m0(e)" is level II and "m(e)" is level III, but their meaning and significance is not explained.

"Arithmetic physics" is first introduced in the 30th minute, when he briefly leaps ahead to his slide 12, where von Neumann is shown as the guru of arithmetic physics, and we are shown that modular forms and lattices are involved, along with von Neumann algebras. At 34 minutes, we are told that level I math is classical, level II is quantum. Later he works through his slides in order, so we see progress in mathematics and physics, from antiquity through to the age of algebraic geometry and algebraic topology (50 minutes forward), then "unification" as exemplified by Gel'fand, Langlands and Penrose, the "octonionic future" is foreshadowed by Witten and M-theory, and then we finally return to von Neumann and arithmetic physics.

So there's a big picture even beyond what we have heard so far, it's interesting but also crazy, and there's a level III which Atiyah has not yet talked about at all.

11. Sep 25, 2018

### Auto-Didact

I've also finished reading and rereading this paper, and also watching the talk he gave this morning. I feel the need to have a look at the second paper regarding the RH to let things sink in a bit more.

The sheer background one needs to actually be able to tackle everything in this paper seriously, not juvenilely as the internet and most mathematicians seem to be doing, is literally staggering. I would really like to hear what Penrose and 't Hooft have to say about it. Luckily for us mortals, we can at least try to understand bits and pieces of it, and hopefully piece things together by working together on different aspects.

It seems that this thing is really best left in the hands of physicists rather than mathematicians... the difference in general attitudes and cultures between the two is remarkable and never ceases to amaze.
The explicit series is explained in section 8, specifically 8.1 through 8.6, while the actual explicit function is given in 8.11, which is exactly the double limit I referred to in my previous post. I agree that the presentation of the series is a bit opaque, but rereading it a second time certainly helps, especially after having listened to the talk with slides.

His infinite series is, in contrast to the more familiar infinite sums and infinite products, an infinite exponentiation, i.e. a power tower $2^{2^{2^{\cdots}}}$. I've definitely seen iterated exponents before, but I am simply not that familiar with infinitely iterated exponents and under what conditions they can be said to converge; the question, as a physicist, is: has anyone? I definitely wouldn't put it past mathematicians to have already dabbled in these matters, for this is a very natural generalization. Incidentally, it also seems to me that the theory of multiplicative calculus (as opposed to standard, additive calculus) may be enlightening in this respect; perhaps there is even another natural generalization, an exponential calculus?
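Mathematicians have in fact studied this: a classical result going back to Euler is that the tower $x^{x^{x^{\cdots}}}$ converges exactly for $e^{-e} \le x \le e^{1/e} \approx 1.4447$, so a tower of 2's diverges. A minimal numerical sketch:

```python
import math

# Infinite power tower x^(x^(x^...)), iterated as t -> x**t starting from t=1.
# Euler's classical result: the tower converges exactly for
# e^(-e) <= x <= e^(1/e) ~ 1.4447; outside that window it fails to converge.
def tower(x, steps=200):
    t = 1.0
    for _ in range(steps):
        t = x ** t
    return t

print(tower(math.sqrt(2)))  # converges to 2, since (sqrt 2)^2 = 2
# tower(2.0) is NOT convergent: 2^(2^(2^...)) grows without bound and
# overflows after a handful of iterations.
```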

As for these equations clearly being iterated maps, this immediately takes me back to dynamical systems and so to bifurcation theory.
Perhaps.
I would say that he has introduced a new function, or more generally a map, which when correctly evaluated for $\pi$ produces $\alpha$ and when evaluated correctly for other numbers would produce other coupling constants.

What the mathematical properties of this map are, however, seems to be unclear within the currently accepted framework of mathematics. This shouldn't be too worrisome, for this has occurred several times in the past, when the mathematical establishment became too entrenched in the reigning orthodoxy; remember that not just discontinuous functions and complex numbers, but even square roots, were once outlawed by the mathematical establishment, until some rogue genius came along and made the entire enterprise look, in hindsight, like a bunch of hardheaded fools.

Luckily, as far as physics goes, in stark contrast to contemporary mathematical practice, that doesn't make any difference whatsoever as long as the theory is capable of producing predictions. I don't think I have to remind anyone here how only relatively recently mathematicians complained that renormalization wasn't a mathematically justifiable procedure nor how the Dirac delta function wasn't a function; we clearly see that the physicists were right to flat out ignore the rebukes from the mathematicians in these cases.
1) Agreed, but I will have to mull this over a bit more.
2) There is an analogous historical precedent regarding a sequence of numbers derivable from Ramanujan's $\tau$-function, which, through the later work of Ian Macdonald, turned out to reflect a deep connection between modular forms and properties of affine root systems of the classical Lie algebras, with one of the numbers in the sequence noticeably absent! In other words, given that the thing can give correct results, one missing number seems like nothing more than a red herring to me.
3) I wouldn't put too much focus on this particular aspect, based on the generality of the arguments given, i.e. it seems pretty clear to me that this theory is not particularly focused on QED, i.e. it should explain the coupling constants for all the forces not simply the electromagnetic case. Especially interesting is the implication of the Type III factor of the von Neumann algebra for the gravitational case; does this imply a connection between this algebra and non-renormalizability?
This doesn't seem to be pure numerology, for multiple reasons; most importantly, the techniques he is utilizing to end up uncovering $\alpha$ (renormalization, multiple scale analysis, iterated maps), which happens to be a dimensionless group, are routinely used to study other dimensionless groups and related topics in dynamical systems theory. See this thread and this post for an example; would one also call that doing mere numerology?

Atiyah also clearly discusses what numerology is in this very paper, before turning to the use of numerical methods, i.e. numerics; the difference is subtle but essential, for it is literally the same difference as between doing astrology and doing astronomy.

Moreover, it seems that this is genuinely a new kind of proposal going beyond known mathematics, instead of work done from inside the framework using only tools that are already known. If Atiyah were merely doing that, like other more mortal mathematicians frequently do, then I would dismiss it, just as I dismiss those other proposals claiming to have solved the Riemann Hypothesis.

Actually I see another very subtle reason for thinking it may in the end work out, namely that sometimes gut instinct can actually turn out to be right; this definitely wouldn't be the first time something like that has happened. The odds of gut instinct being right increase with years of experience, especially for someone of Atiyah's caliber, but of course there is also a counter-term at work here, depending among other things on very advanced age.

There is also another reason, but I will get to that after addressing the following points:
That all goes without saying, especially the part about the current ideas being experimentally challenged, to put things mildly. But what this actually signifies is a need for new ideas, not a rehash of old ones. For a more academically based argument why we should not be rehashing old methods, I refer you to another thread about a recent proposal by Lucien Hardy to employ a constructive methodology for tackling open fundamental problems in theoretical physics; you should in particular have a gander at my post in that thread.
It goes without saying that renormalization plays a big role in physics for renormalizable QFTs such as QED, but surely you (and others) recognize renormalization theory is a much broader topic in mathematics ranging well beyond QFT, instead connected to the existence of universality classes for second order phase transitions and critical phenomena? Atiyah's treatment of renormalization doesn't seem to differ significantly from how renormalization is carried out routinely in the study of bifurcation theory.

12. Sep 25, 2018

### mitchell porter

Atiyah tells us (section 2) that the key to his construction is Hirzebruch formalism applied to von Neumann algebras. Hirzebruch's formalism should be somewhere in this book (that's the whole text), and then it's somehow applied to the process that "converts" type I algebras to type II. It shouldn't be long before some mathematician who already knows both these topics, clarifies for the rest of us what this could mean and whether it makes sense.

A few more remarks about this business of type I, II, III. That is a classification of factors of von Neumann algebras. I'm calling Atiyah's conception, levels I, II, III, because he wants to associate the algebraic types with some other concepts. In particular, level I is associated with things that are commutative and associative (real and complex numbers), level II is noncommutative (quaternions), level III is nonassociative (octonions). He also (section 9.4) thereby associates level I with electroweak, level II with strong force, level III with gravity. (By the way, other people have connected quantum gravity with nonassociativity, so that's not new.)

@Auto-Didact, you bring up dynamical systems theory. Vladimir Manasson (e.g. eqn 11 here) discovered that $1/\alpha \approx 2\pi\delta^2$, where $\delta$ is Feigenbaum's constant! This is the only way I can imagine Atiyah's calculation actually being based in reality - if it really does connect with bifurcation theory.
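The numerical coincidence is easy to check with the known value of Feigenbaum's constant (a coincidence check only, not a derivation):

```python
import math

# Manasson's observation: 1/alpha is close to 2*pi times the square of
# Feigenbaum's period-doubling constant.  Just checking the numbers.
DELTA = 4.669201609102990   # Feigenbaum's constant
INV_ALPHA = 137.035999      # measured 1/alpha

estimate = 2 * math.pi * DELTA ** 2
print(estimate)                                # ~136.98
print(abs(estimate - INV_ALPHA) / INV_ALPHA)   # relative error of a few parts in 10^4
```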

13. Sep 25, 2018

### ohwilleke

I am quite skeptical of any effort to derive the fine structure constant by a means independent of the weak force coupling constant, as the two are intimately and functionally related in electroweak theory. A "pleasing" numerology ought to simultaneously give you both, or any other pair of constants from which you can derive those two (for example, the ratio of the W and Z boson masses could substitute for the weak force coupling constant).

Also, given the great precision with which the fine structure constant is known, and the fact that any result will necessarily be a post-diction, any formula that provides less than a perfect match to within the margins of experimental error or very nearly so, isn't really worth considering, standing alone.

What is that value (via the Particle Data Group)?

1/137.035 999 139(31)

So, a value of 1/137.0136... doesn't cut it, given that the measured value is known to nine significant digits.

14. Sep 25, 2018

### mitchell porter

That is my mistake, I was just trying to quote the measured value and inserted a spurious digit. So far no-one even understands how Atiyah intends to get any closer than 137 = 1 + 8 + 128.

As was already mentioned, the orthodox view of the coupling constants is that they should take a simple value at high energies, like "1/24", and then the measured values should be "1/24 + corrections", where the corrections are deduced from the equations of motion, and are something complicated with a logarithmic dependence on that high energy scale.

However, I don't entirely rule out that ~1/137 has a simple origin. A QFT can contain an infrared fixed point, in which the running of couplings converges on a simple value. (And if you find the Koide formula convincing, that's also evidence.) I like Manasson's formula in that case, because it employs a constant (Feigenbaum's), that genuinely shows up in critical phenomena.
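For reference, the Koide relation mentioned above can be checked directly against the PDG charged-lepton pole masses (central values assumed here):

```python
import math

# The Koide relation for the charged leptons:
#   Q = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))^2,
# conjectured to equal exactly 2/3.  Masses in MeV, PDG central values.
m_e, m_mu, m_tau = 0.5109989461, 105.6583745, 1776.86

Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(Q)  # ~0.66666, strikingly close to 2/3
```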

As for whether one should expect successful fine-structure numerology to also tell you the weak coupling constant... that's not so clear. If 1/137 is a deep infrared phenomenon, it might be genuinely independent of anything that happens above the Fermi scale, such as electroweak unification. Or maybe there is a second, associated formula.

Sean Carroll just blogged about some of these issues: "Atiyah and the Fine-Structure Constant".

15. Sep 26, 2018

### Demystifier

That's really great. Perhaps Atiyah has done something important about the Riemann conjecture (I cannot tell), but I am convinced that his work on the fine-structure constant is, from the physical point of view, total nonsense.

16. Sep 26, 2018

### lpetrich

Even if one confines oneself to the Standard Model, it is evident that the fine structure constant is not fundamental. At the mass scale of the W particle, its effective value is around 1/128 rather than its familiar value, around 1/137.036 (Current advances: The fine-structure constant). The familiar value is for the zero-energy / zero-momentum limit. Furthermore, the electromagnetic interaction emerges from the SU(2) and U(1) parts of the electroweak gauge interactions, and the elementary electric charge likewise emerges from those parts' coupling parameters.

17. Sep 26, 2018

### lpetrich

I went to Particle Data Group - 2018 Review and I got some idea of how much precision that one has for the Standard Model's parameters.
• The fine-structure constant (zero energy/momentum): 0.23 ppb
• The charged leptons' masses (on the mass shell): e 6.2 ppb, mu 23 ppb, tau 67 ppm
• The quarks' masses (u, d, s at 2 GeV; c, b, (?) t on-shell), as fractional uncertainties: u 0.20, d 0.085, s 0.063, c 0.024, b 0.0084, t 0.0023
• Quark-mixing matrix elements: 32 to 740 ppm (absolute)
• Neutrino masses and mixing angles are very imprecise
• Weak-interaction coupling constant (Fermi, low-energy): 510 ppb
• Weak-interaction mixing angle (Weinberg, low-energy): 170 ppm
• W, Z, Higgs masses(on-shell): 150 ppm, 23 ppm, 1300 ppm
• QCD coupling constant (m_Z): 0.0093
It's surprisingly good, and one can get several of the Standard Model's parameters to within 1% or less. Extrapolating up to GUT energies with the MSSM, one can get gauge unification to around the experimental precision, meaning that the GUT-scale coupling constant is determined to within 1% or so.

18. Sep 26, 2018

### ohwilleke

The neutrino constants aren't all that bad. They rival quark mass accuracy and the accuracy of the QCD coupling constant (the low accuracy of which is one of the main reasons that quark mass determinations are so inaccurate).

It is also worth noting that while percentage accuracy is useful for many purposes, in other applications, the absolute magnitude of the uncertainty matters more, and on that score, the dominant uncertainty in the SM physical constants is the top quark mass, and the uncertainties in the absolute neutrino mass constants are tiny.

There are four parameters of the PMNS matrix, three of which are known with moderate accuracy. The fractional uncertainty in these three parameters is:

• theta12: 0.0238
• theta23: 0.0525
• theta13: 0.052

The Dirac CP phase is constrained within ~15% (~9%) uncertainty in NO (IO) around nearly-maximal CP-violating values; the CP-violating parameter of the PMNS matrix excludes the no-CP-violation case at two sigma.

The uncertainty in the difference between the first and second neutrino mass eigenstate is roughly 0.014, and the difference between the second and third neutrino mass eigenstate is roughly 0.01, which implies that the sum of the three neutrino mass eigenstates cannot be less than about 65.34 meV with 95% confidence.

Astronomy data can now credibly support a 0.091 eV upper limit on the sum of the three active neutrino masses at a 95% confidence level (i.e. 2 sigma). The "normal" neutrino mass hierarchy is now favored over the "inverted" neutrino mass hierarchy at the 3.5 sigma level by existing available data.

Sum of all three neutrino masses should be in the range: 65.34-91 meV.

The range of the three absolute neutrino masses that would be consistent with experimental data is approximately as follows (with the location of each mass within the range being highly correlated with the other two and the sum):

Mv1 0-7.6 meV
Mv2 8.42-16.1 meV
Mv3 56.92-66.2 meV

Thus, we know the absolute values of the second and third neutrino mass eigenvalues, and the sum of the three neutrino masses, with close to the same precision as we know the up and down quark masses.

Neff is equal to 3.046 in the case with the three Standard Model neutrinos, with any neutrinos of mass 10 eV or more not counting in the calculation. As of 2015, the constraint with Planck data and other data sets was 3.04 ± 0.18 (even in 2014, cosmology ruled out sterile neutrinos). The four-neutrino case is already ruled out at more than the 5.3 sigma level, which meets the threshold for a scientific discovery that there are indeed only three neutrinos with masses of 10 eV or less.

The exclusion of more than three active neutrinos from weak boson decays is far more stringent than the Neff constraints from cosmology.

The minimum half-life of neutrinoless double beta decay is 5.3×10^25 years at 90% C.L.; by comparison, the age of the universe is roughly 1.4×10^10 years.

Last edited: Sep 30, 2018
19. Sep 26, 2018

### ohwilleke

Last edited: Sep 30, 2018
20. Sep 26, 2018

### Copernicuson

I wish they would call it the inverse fine structure constant, or the Sommerfeld fine structure constant.