What are the most important phenomena that the Standard Model can't explain?

  • #1
AndreasC
I guess the crux of the question is, where are we more likely to encounter new physics? The standard model already explains almost every experimental result, but not EVERY result. But what are some of the most important results that are either incompatible with the standard model or just inadequately explained? Usually when there were major breakthroughs in physics, you had a phenomenon or a class of phenomena that couldn't be explained by the old theory. For QM it was the ultraviolet catastrophe, the photoelectric effect etc, for example. Is there any chance at this point that the solution to the impasse physics kinda seems to be at could come in the form of an explanation of some yet unexplained effect?
 
  • #2
Good question. I can't think of any clear experimental results that substantially deviate from the standard model, and even checking the unsolved problems in physics page on wikipedia doesn't really help me find anything on this. But I'm no expert on this matter. Hopefully someone else can chime in.
 
  • #3
I mean, if there are no results deviating from the model or being left unexplained, it sounds extremely hard to even begin to think about moving beyond it...
 
  • #4
where are we more likely to encounter new physics? The standard model already explains almost every experimental result, but not EVERY result. But what are some of the most important results that are either incompatible with the standard model or just inadequately explained? . . . Is there any chance at this point that the solution to the impasse physics kinda seems to be at could come in the form of an explanation of some yet unexplained effect?

There are a few subsets of "unexplained" phenomena that deserve mention. I break them for convenience into seven categories, some of which have subcategories.

1. QCD phenomena that the existing SM might be able to explain, but doesn't really explain yet.

Basically, this category is reserved for hadronic physics, i.e. how composite, strong force bound particles behave. These discrepancies could be entirely due to practical difficulties involved in doing the calculations and conducting the experiments, but could also arise because there are some rules of QCD that the Standard Model omits or doesn't get quite right. There is no area where results that facially show experimental outcomes inconsistent with a Standard Model derived prediction are more common. But since nobody actually does the true Standard Model calculation, and instead everyone uses approximations derived from it, it is sometimes hard to know where the problem arises.

a. Early on, we pretty much figured out the entire three quark baryon spectrum and the pseudo-scalar meson spectrum, and we can explain their properties pretty well with QCD.

But there are lots of hadrons whose spectra are not well understood - basically the scalar and axial vector mesons, some of which are thought to be, exclusively or in mixtures, quarkonia, glueballs, tetraquarks, pentaquarks, hexaquarks, unstable meson "molecules", unstable baryon "molecules" (i.e. apart from atomic nuclei), and meson-baryon "molecules".

We can suggest heuristic and qualitative explanations for why we haven't seen free glueballs yet (despite the fact that their properties were among the earliest hadron properties determined from first principles with QCD, since at first order the only SM constant needed to describe them is the QCD coupling constant, with quark masses, CKM matrix parameters, the weak force coupling constant, the electromagnetic coupling constant and neutrino properties only figuring in at remote loop levels that can be ignored without meaningful loss of precision). But we still don't really have a good quantitative way of explaining why we see the overall observed spectrum of scalar and axial vector mesons and pentaquarks that we do, or of explaining the relative frequency of "vanilla" baryons and pseudo-scalar mesons on the one hand, and other hadrons on the other.

There isn't a terribly good reason to think that the SM won't be able to solve these questions, but so far, they have proven intractable. We have papers purporting to explain these phenomena (with error bars typically on the order of 2%-3%), but they contradict each other.

b. We've known that it should be possible to calculate parton distribution functions (PDFs) from first principles in the Standard Model since the late 1970s and early 1980s when the Standard Model was formulated. But this was first actually managed, outside the most idealized circumstances, only in May of this year. Until now (and still now, in practical applications) we instead determine PDFs phenomenologically: tallying up billions of data points showing actually observed partons, putting them on charts, and fitting mathematical curves to them, with only modest guidance from the underlying QCD about how the curves should look. So this is still basically an unsolved problem in its infancy.
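Since PDFs are, in practice, determined by curve fitting, a toy illustration may help. This is a minimal sketch, not a real PDF extraction: it assumes a common phenomenological shape f(x) = N·x^a·(1-x)^b and fits it to mock data by linear least squares in log space. The functional form and all numbers are illustrative assumptions, not values from any actual PDF set.

```python
import numpy as np

# Toy version of a phenomenological PDF fit (illustrative only):
# assume f(x) = N * x**a * (1 - x)**b and fit the parameters to mock
# "measured" points by linear least squares in log space:
#   log f = log N + a*log(x) + b*log(1 - x)
rng = np.random.default_rng(0)
x = np.linspace(0.05, 0.95, 40)
true_N, true_a, true_b = 2.0, -0.5, 3.0
f_obs = true_N * x**true_a * (1 - x)**true_b * rng.lognormal(0, 0.02, x.size)

A = np.column_stack([np.ones_like(x), np.log(x), np.log(1 - x)])
coef, *_ = np.linalg.lstsq(A, np.log(f_obs), rcond=None)
N_fit, a_fit, b_fit = np.exp(coef[0]), coef[1], coef[2]
print(f"fitted N={N_fit:.2f}, a={a_fit:.2f}, b={b_fit:.2f}")
```

Real global PDF fits use far more flexible parameterizations and fold in QCD evolution equations, but the basic "fit a curve to billions of data points" character is the same.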

c. QCD calculations are not very precise, and this isn't just a matter of imprecise measurements.

For example, we have measured the mass of the charged pion to a precision of 1.5 parts per million. But we can only calculate that mass from first principles to a precision of a few parts per thousand.

We understand why this is the case quite well. The strong force coupling constant has been measured only to about one part per thousand. And because the dimensionless strong force coupling constant is large relative to one, even at five loops you only get about one part per thousand theoretical precision, compared to one part per 61 billion theoretical precision in five loop QED calculations, where the dimensionless coupling constant is much smaller relative to one. Photons don't have self-interactions and couple to only one kind of charge plus polarization (strictly speaking photons themselves are uncharged, but they interact with electromagnetic charges that can be either positive or negative in one dimension only), greatly reducing the number of possible Feynman diagrams per loop and qualitatively reducing the complexity involved, while gluons have self-interactions and three color charges in addition to polarization, requiring far more Feynman diagrams per loop. And we can observe free photons and charged particles, while quarks and gluons (apart from top quarks) are confined, so we have to infer their properties from composite particles.
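A crude way to see the precision gap (a back-of-the-envelope sketch that ignores the growth of the series coefficients, which actually makes matters worse for QCD): if the n-loop term is roughly the coupling raised to the n-th power, the attainable precision after n loops is set by the size of the first neglected terms. The coupling values below are illustrative.

```python
# Naive size of the n-th order term in a perturbative expansion, taking it
# to scale like coupling**n. Illustrative couplings: alpha_QED ~ 1/137;
# alpha_s ~ 0.3 at a typical hadronic energy scale.
alpha_qed = 1 / 137.036
alpha_s = 0.30

for n in range(1, 7):
    print(f"n = {n}: alpha_QED**n ~ {alpha_qed**n:.1e}, alpha_s**n ~ {alpha_s**n:.1e}")
```

By n = 5 the QED terms are far below any measurable level, while the QCD terms are still at the part-per-thousand scale, in line with the precision figures quoted above.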

But the problem of the slow convergence of the infinite series path integrals we use to do QCD calculations isn't simply a case of not being able to throw enough computing power at it. The deeper problem is that the infinite series whose sum we truncate to quantitatively evaluate path integrals isn't convergent. After about five loops, using current methods, your relative error starts increasing rather than decreasing. From the link in this paragraph:

[figure from the linked source: relative error as a function of the number of loops]

When you can get to parts per 61 billion accuracy at five loops as you can in QED, or even to the roughly one part per 10 million accuracy you can in weak force calculations to five loops, this is a tolerable inconvenience since our theoretical calculations still exceed our capacity to measure the phenomena that precisely. But when you can only get to parts per thousand accuracy at five loops as is the case in perturbative QCD calculations, an inability to get great precision by considering more loops is a huge problem when it comes to making progress.
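The "error eventually starts increasing" behavior can be demonstrated with a standard toy model of an asymptotic series (this is a textbook illustration, not a QCD calculation): the partial sums of Σ (-1)^n n! x^n first approach the function they represent, then blow up, with the best achievable error set near the smallest term of the series.

```python
import math
import numpy as np

# Toy asymptotic (divergent) series: sum_n (-1)**n * n! * x**n.
# Its "exact" target is the integral ∫ e^{-t}/(1 + x t) dt over [0, ∞),
# evaluated here by Gauss-Laguerre quadrature. Watch the truncation error
# shrink, bottom out near n ~ 1/x, then grow without bound.
x = 0.2
nodes, weights = np.polynomial.laguerre.laggauss(60)
exact = float(np.sum(weights / (1 + x * nodes)))

errors, partial = [], 0.0
for n in range(12):
    partial += (-1) ** n * math.factorial(n) * x ** n
    errors.append(abs(partial - exact))
    print(f"truncate at n={n:2d}: error = {errors[-1]:.2e}")
```

The minimum error occurs around n ≈ 1/x = 5, after which adding more terms only makes things worse, which is the qualitative situation described above for perturbative QCD.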

You can't even measure the fundamental constants with more accuracy due to this problem, because you do that by theoretically reverse engineering precisely measured observable quantities of composite particles (like the charged pion, proton and neutron masses) with QCD theoretical formulations, to figure out the values of the fundamental constants that reproduce those experimental values in the theory.

Now, this isn't necessarily insurmountable. It might be possible to use math tricks to do better. For example, mathematically speaking, one can resort to techniques along the lines of a Borel transformation to convert a divergent series into a convergent one. And some limitations associated with perturbative path integral formulations can be overcome by using non-perturbative QCD methods like lattice QCD. But getting great precision in QCD calculations is as much a theoretical problem as a practical one.
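To make the Borel idea concrete, here is a sketch on the same kind of toy series (an illustration of the general mathematical trick, not of an actual QCD resummation): dividing the n-th coefficient by n! turns the divergent series into a convergent geometric series, which is then integrated back against e^(-t).

```python
import math
import numpy as np

# Borel resummation sketch for the toy series sum_n (-1)**n * n! * x**n.
# Step 1: divide the n-th coefficient by n!  ->  (-1)**n, giving the
#         Borel transform sum_n (-t)**n = 1/(1 + t), now convergent.
# Step 2: undo the n! with a Laplace-type integral over the transform,
#         ∫ e^{-t} / (1 + x t) dt, done here by Gauss-Laguerre quadrature.
x = 0.2
nodes, weights = np.polynomial.laguerre.laggauss(60)
borel_sum = float(np.sum(weights / (1 + x * nodes)))

naive = sum((-1) ** n * math.factorial(n) * x ** n for n in range(20))
print(f"Borel-resummed value: {borel_sum:.6f}")
print(f"naive 20-term partial sum: {naive:.1f}  (already diverging)")
```

The naive partial sum is useless by twenty terms, while the Borel-resummed value is finite and stable; whether an analogous maneuver can be carried out rigorously for QCD is part of the open theoretical problem.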

2. Tensions between experimental measurements and SM predictions.

There are several instances of tensions between experimental measurements and SM predictions.

This isn't a new issue, although the specific anomalies have varied over time. But so far, these tensions have shown a strong tendency to resolve themselves with better theoretical SM prediction calculations and more precise experiments.

For example, a tension between measurements of the proton charge radius in ordinary hydrogen (with an electron "orbiting" the proton) and in muonic hydrogen (with a muon "orbiting" it) was resolved last September by this means (in favor of the muonic hydrogen measurement). Similarly, an anomalous 750 GeV resonance disappeared with later experimental measurements and turned out to be a fluke. And evidence of superluminal neutrino speeds at the OPERA experiment turned out to be due to experimental error.

Some of the existing tensions include the following:

There are some indications of a violation of charged lepton universality that shows up in B meson decays but in almost no other context (with constraints from other contexts being quite strict at the LHC and in charmed meson decays). Multiple experiments are looking at the question right now.

A measurement called muon g-2 (i.e. the anomalous muon magnetic dipole moment) does not quite square with the predicted value. A similar issue exists, but smaller in magnitude and in the opposite direction, for electrons. Most of the theoretical uncertainty is attributable to the QCD contribution which, as discussed above, is a dicey business and could easily be wrong at this level of precision (e.g. an underestimate of the uncertainty of the QCD contribution could make the discrepancy between theory and experiment seem much greater than it is in reality). The errors in the theoretical calculation, by component, are summarized roughly as follows (in comparable units):

QED 0.08 (i.e. 0.2% of the total)
Weak Force 1.00 (i.e. 2.9% of the total)
QCD 33.73 (i.e. 96.9% of the total).

Proportion of total value from each component:

QED 99.994% (116 584 718.95)
Weak Force 0.00013% (153.6)
QCD 0.006% (6931)

Relative error percentage:

QED 0.000 000 0686%
Weak Force 0.65%
QCD 0.49%
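The arithmetic behind the percentages above can be checked directly (values as quoted in this post, in units of 10^-11; all approximate):

```python
import math

# Reproduce the muon g-2 error budget arithmetic from the figures above.
value = {"QED": 116_584_718.95, "weak": 153.6, "QCD": 6_931.0}
error = {"QED": 0.08, "weak": 1.00, "QCD": 33.73}

total_err = sum(error.values())
for part in value:
    share = 100 * error[part] / total_err      # share of the error budget
    rel = 100 * error[part] / value[part]      # relative error of the component
    print(f"{part}: {share:.1f}% of the error budget, relative error {rel:.2g}%")

# Independent errors combine in quadrature for the full prediction:
combined = math.sqrt(sum(e ** 2 for e in error.values()))
print(f"combined uncertainty ~ {combined:.1f} x 1e-11")
```

Because the QCD error dwarfs the others, the combined uncertainty is essentially just the QCD uncertainty; the QED and weak contributions are negligible in the budget.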

Two experiments are currently underway to revisit the experimental measurement of muon g-2 with greater precision.

There is a 2.8 sigma discrepancy between the mean lifetime of a free neutron as measured by two different methods.

There is a similar tension in measurements of one of the elements of the CKM Matrix: "The Vus determinations based on the inclusive branching fraction of τ to strange final states are about 3σ lower than the Vus determination from the CKM matrix unitarity."

The observed abundance of Lithium-7 in the universe isn't quite what Big Bang Nucleosynthesis predicts (a prediction which has as its foundation the basic, lab measurable properties of the various possible nuclear fusion, nuclear fission and atomic decay reactions).

A group of scientists in Hungary claims to have discovered a 17 MeV rest mass boson (dubbed X17) that causes the decay products of excited helium-4 and beryllium-8 nuclei to be emitted at a non-Standard Model angle, allegedly as the fundamental carrier boson of a new fifth force. The result has high statistical significance, but it has not been independently replicated since it was first announced in 2016, despite the fact that it should have been discernible in earlier experiments.

The strengths of particular decay channels of the Higgs boson don't always square exactly with the theoretical predictions for a Higgs boson of the observed mass, although overall the fit is quite good. The margins of error in some of these measurements are large (which makes the anomalous results statistically insignificant), and those errors could be somewhat underestimated (further reducing the significance of any deviations from the Standard Model prediction). Other properties of the Higgs boson appear to be a perfect match to the Standard Model expectation.

3. Neutrino Physics.

In the original Standard Model, neutrinos had zero rest mass and did not oscillate between masses or weak force generations. But we eventually discovered that there are at least two or three non-zero neutrino mass eigenstates and that the neutrinos oscillate in a manner well described by the four parameter PMNS matrix and the mass differences between the neutrino eigenstates.
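For a feel for what "oscillation" means quantitatively, here is the standard two-flavor approximation (a textbook formula rather than the full three-flavor PMNS treatment; the mixing angle and mass splitting below are illustrative, roughly solar-sector values):

```python
import math

# Two-flavor neutrino survival probability:
#   P = 1 - sin^2(2θ) * sin^2(1.27 * Δm²[eV²] * L[km] / E[GeV])
theta = math.radians(33.4)   # roughly the solar mixing angle θ12 (illustrative)
dm2 = 7.4e-5                 # eV^2, roughly Δm²21 (illustrative)

def survival(L_km, E_GeV):
    """Probability that a neutrino is detected in its original flavor."""
    return 1 - math.sin(2 * theta) ** 2 * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2

print(f"P(nu_e -> nu_e) at L=180 km, E=3 MeV: {survival(180, 0.003):.3f}")
```

The oscillation depends only on the mass-squared difference, not the absolute masses, which is one reason the absolute neutrino mass scale remains an open question.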

This gave the Standard Model seven new experimentally measured physical constants and left us with lots of questions that remain unanswered, because the source of neutrino mass doesn't fit neatly into the narrative that explains the generation of the other fundamental particle masses in the Standard Model. Since massive neutrinos are a recent addition to the Standard Model, some people don't consider them to be part of the Standard Model in the strict sense of the term.

There have also been hints in observations of neutrinos produced by nuclear reactors that there may be more than three kinds of neutrino states that are part of the overall oscillation pattern. But the evidence on this point is contradictory and remains unresolved.

There are lots of unsolved problems in neutrino physics, some of which could reveal new physics depending upon how they are resolved. Neutrinos are very challenging to observe experimentally, which is one reason so many questions remain open. Some of the big open questions in neutrino physics are:

i. Is there a normal ordering or inverted ordering of neutrino masses?

ii. What is the lightest neutrino rest mass eigenstate?

iii. Is PMNS matrix parameter θ23 a little more than 45º or a little less (with the same magnitude of difference from 45º in either case)?

iv. What is the CP violating phase of the PMNS matrix?

v. Are there non-sphaleron processes that do not conserve lepton number and that implicate neutrino physics, like neutrinoless double beta decay (which hasn't been seen to date in replicated experiments, but which wouldn't be distinguishable from backgrounds under the most conventional assumptions about the process if neutrino masses are in the milli-electron volt (meV) range, as widely suspected from multiple lines of evidence)?

vi. By what means do neutrinos acquire their rest mass? There are multiple competing theories to explain this, none of which has observational support to prefer it at this point.

vii. Do sterile neutrinos that oscillate with ordinary neutrinos, or other right handed neutrinos of some type, exist? The evidence on this question is mixed, with some examinations seemingly ruling it out, and others seeing statistically significant signals although not always consistent with each other.

viii. Do neutrinos have "non-standard interactions" or otherwise violate the SM massive neutrino model? There are some fairly strict experimental constraints on such interactions, however.

ix. What is the ratio of neutrinos to antineutrinos in the Universe?

x. Are the properties of ultrahigh energy cosmic ray neutrinos compatible with the Standard Model?

4. Quantum Mechanical Foundations

Essentially all of the phenomena described in the Standard Model are explained at the quantum mechanical level.

When we observe quantum mechanical phenomena there is a "collapse of the wave function" that gives the phenomena specific experimentally measurable values. But when we don't observe a quantum mechanical system, it behaves in ways that are inconsistent with a single, deterministic, "real", local, causal particle theory. These foundations have a couple of aspects that are particularly vexing.

One, called the "measurement problem" in quantum mechanics, is that existing theory is less precise than it might be about what exactly constitutes a measurement that triggers a collapse of the wave function, and about whether describing what we see as a collapse of the wave function is a correct way of understanding what we observed.

Another is related to quantum entanglement. In this phenomenon, there are hints of either non-locality or causation violations (i.e. with information moving backwards in time to some extent). Understanding this is an ongoing effort.

5. Why do the Standard Model parameters take the values that they do?

The Standard Model has lots of parameters that can only be determined experimentally. They cannot be determined from theory from other constants or mathematical reasoning.

In the Standard Model, there are fifteen mass parameters (arguably more accurately described as Yukawa coupling constants of the Higgs boson).

There are four parameters related to the CKM matrix that explains the probability of a quark turning into another kind of quark in a W boson interaction.

There are four parameters related to the PMNS matrix that explains neutrino oscillation.

There are three fundamental force coupling constants (strong, weak, electromagnetic a.k.a. SU(3), SU(2) and U(1)).

A few of these are related to each other in a manner that has fewer degrees of freedom than there are constants as a result of electroweak unification relationships between them. For example, the Z boson mass can be determined from the W boson mass, the U(1) coupling constant and the SU(2) coupling constant in the Standard Model.
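The W/Z relationship mentioned above can be checked at tree level (a sketch; the inputs are approximate, and the small residual gap to the measured Z mass is absorbed by radiative corrections and scheme choices):

```python
import math

# Tree-level electroweak relation: with cos θ_W = g / sqrt(g**2 + g'**2),
# the Z mass follows from the W mass via M_Z = M_W / cos θ_W.
m_w = 80.38                  # GeV, measured W mass (approximate)
sin2_theta_w = 0.2315        # effective weak mixing angle (approximate)

m_z_tree = m_w / math.sqrt(1 - sin2_theta_w)
print(f"tree-level M_Z ~ {m_z_tree:.2f} GeV (measured: 91.19 GeV)")
```

The half-percent mismatch is expected at tree level; the point is only that the Z mass is not an independent free parameter once the W mass and the mixing angle are fixed.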

There are observable patterns in the values of those parameters that don't have a Standard Model explanation. For example:

* The masses of the charged leptons conform to Koide's Rule.
* Generalizations of Koide's rule provide reasonable (but not perfect) approximations of the masses of the quarks and of the neutrinos, suggesting that this line of reasoning could be on the right track. There are also other hypotheses that have been proposed to explain the relative masses of these particles.
* The sum of the squares of the masses of the fundamental fermions of the Standard Model, combined with the sum of the squares of the masses of the fundamental bosons of the Standard Model, is almost precisely equal to the square of the Higgs field vacuum expectation value (but the sum of the squared boson masses is greater than the sum of the squared fermion masses at a highly statistically significant level).
* The mass of the Higgs boson differs from the sum of two times the mass of the W boson plus the mass of the Z boson plus the (zero) mass of the photon by an amount potentially explainable by the kind of adjustments made to more fundamental particle masses in supersymmetric calculations.
* The amount of CKM matrix quark mixing between generations is approximately, but not exactly, the same in the two kinds of possible first to second generation transitions, and in the two kinds of possible second to third generation transitions, respectively, and likewise in the two kinds of possible first to third generation transitions.
* It is possible within existing experimental margins of error to describe the CKM matrix with one or two fewer parameters than the SM requires.
* Quarks and charged leptons of the same type except for fermion generation get heavier as they get higher in generation and it appears likely that this is true in the case of neutrinos as well.
* All massive fundamental particles in the SM have weak force interactions, while all massless fundamental particles in the SM do not have weak force interactions.
* It has been hypothesized that the CKM matrix parameters and PMNS matrix parameters are related to each other in "quark-lepton complementarity".
* The electron rest mass is very close to the electron self-energy.
* The ratio of the charged fermion masses to the neutrino masses is on the same order of magnitude as the ratio of the strong force coupling constant to the weak force coupling constant.
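Two of the listed patterns are easy to check numerically with publicly quoted masses (approximate values; nothing here implies the patterns are more than numerology):

```python
import math

# 1) Koide's rule for the charged leptons: Q should be very close to 2/3.
m_e, m_mu, m_tau = 0.51099895, 105.6583755, 1776.86   # MeV (approximate)
Q = (m_e + m_mu + m_tau) / (math.sqrt(m_e) + math.sqrt(m_mu) + math.sqrt(m_tau)) ** 2
print(f"Koide Q = {Q:.6f} (2/3 = {2/3:.6f})")

# 2) Sum of squared masses of the heavy fundamental particles vs. the squared
#    Higgs vacuum expectation value (GeV; the lighter fermions are negligible).
masses = {"top": 172.7, "Higgs": 125.25, "Z": 91.19, "W": 80.38}
sum_sq = sum(m ** 2 for m in masses.values())
v = 246.22  # GeV, Higgs field vacuum expectation value (approximate)
print(f"sum of m^2 = {sum_sq:.0f} GeV^2 vs v^2 = {v**2:.0f} GeV^2")
```

Both relations hold to well under a percent with current values, which is exactly the kind of suggestive but unexplained pattern the list describes.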

The point of this post, to be clear, isn't to propose that any of these relationships are anything more than coincidence and "numerology". And, the list above is hardly exhaustive. I have a whole folder of bookmarked pre-prints and papers containing similar attempts. The point is that the Standard Model constants show enough patterns to suggest that they probably aren't really arbitrary as the Standard Model implicitly assumes. But nobody knows why those relationships and patterns emerge from a more fundamental theory with any confidence or widespread acceptance.

Still, most people informed about the matter think that the fundamental constants of the Standard Model, rather than being truly fundamental, are actually derived from some deeper relationships in a simpler more fundamental theory with fewer experimentally measured free parameters.

String theory is the most popular version of such a theory but has not developed to a point where it can offer any real guidance on any of the problems that a deep theory might solve yet. See, e.g., this example of a theoretical particle physicist seeing string theory as a possible explanation citing this paper.

The process of connecting these dots is an ongoing one aided by efforts to more exactly measure these Standard Model parameters which in turn makes it possible to rule out at statistically significant levels some explanations for how these parameters came to be, while focusing greater attention on others.

6. Why is the Standard Model the way it is in other respects?

In a similar vein, we would ideally like to know why the laws of nature except gravity seem to be described by a SU(3)xSU(2)xU(1) group theory and how quantum gravity fits into this mix. Things looked promising when electroweak unification theory helped to explain the SU(2)xU(1) part of the Standard Model in a unified manner. But a Grand Unified Theory that combined all three Standard Model forces into a deeper unified theoretical structure has proven a tougher nut to crack.

There are also a number of matters described as "unsolved problems in physics" that are efforts (largely futile) to explain why Standard Model parameters differ from hypotheses about what values these parameters should take in some implicit general type of theory about how they arise.

These include the "strong CP problem" (why doesn't the strong force have charge parity violations the way that the weak force and neutrino oscillations do?), the "hierarchy problem" (why do Standard Model masses have such a narrow range of values when one way of formulating the factors that add up to these values involves such huge numbers?), the "naturalness problem" (why are certain numerical values of constants and ratios so far from being on the order of one?), and the baryon matter-antimatter asymmetry of the universe (if the Big Bang arose from pure energy with zero baryon number and zero lepton number, why are the baryons and charged leptons in our universe overwhelmingly matter rather than antimatter, given that we know of only one very high energy process that violates baryon number and lepton number, and that CP violation rates are too modest to explain the disparity?).

7. How can the Standard Model be reconciled with gravity?

While the Standard Model incorporates special relativity, it is inconsistent at a basic, theoretical and mathematical level with classical General Relativity which explains gravity.

Most people think that the solution is to formulate a theory of quantum gravity, rather than to change the Standard Model significantly, but that is an open question.

The omission of quantum gravity from the Standard Model means that some aspects of the Standard Model are almost certainly slightly wrong, however, because in Standard Model physics, calculations of almost everything implicate almost everything else at some level, due to higher loop Feynman diagrams that incorporate every possible way that something could happen.

For example, muon g-2, a property of the fundamental fermion known as the muon, can be calculated 99.994% from QED alone. But the last 0.006% of the value requires consideration of the strong force, even though muons themselves have no tree level strong force interactions. Indeed, the strong force contribution to the value of muon g-2 turns out to be much greater than the weak force contribution, even though the muon does have tree level interactions via the weak force.

In ordinary laboratory conditions, the difference between the Standard Model as we know it (which ignores quantum gravity loops) and a version that included them should be smaller than we can measure, and is thus safe to ignore, because gravity is so weak relative to the three Standard Model forces at particle physics scales.

But in some applications, like the running of the experimentally determined constants of the Standard Model at high energies, the quantum gravity contribution should be material.

For example, one famous result of Standard Model physics is that gauge coupling constant unification does not occur at high energies, contrary to what was initially thought to happen in the minimal supersymmetric model (that hypothesis, widely touted in textbooks, has since been determined to be untrue based upon current measurements of those constants; if you try to predict the strong coupling at the Z mass this way, you get 0.129 +/- 0.002, whereas the measured value is 0.119 +/- 0.002; see, e.g., pages 38-40 and 57-59 of this presentation).
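The non-unification statement can be illustrated with the standard one-loop running formulas (approximate inputs; this is a sketch rather than a precision analysis, since two-loop terms and particle thresholds shift the details):

```python
import math

# One-loop running of the inverse SM gauge couplings:
#   1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / 2π) * ln(mu / M_Z)
# with the standard one-loop SM coefficients (GUT-normalized U(1)).
b = {"U(1)": 41 / 10, "SU(2)": -19 / 6, "SU(3)": -7.0}
inv_alpha_mz = {"U(1)": 59.0, "SU(2)": 29.6, "SU(3)": 8.47}  # approximate

def inv_alpha(group, mu_gev, m_z=91.19):
    return inv_alpha_mz[group] - b[group] / (2 * math.pi) * math.log(mu_gev / m_z)

for mu in (1e13, 1e15, 1e17):
    vals = ", ".join(f"{g} {inv_alpha(g, mu):.1f}" for g in b)
    print(f"mu = {mu:.0e} GeV: {vals}")
```

Pairs of couplings cross at different scales (roughly 10^13 to 10^17 GeV), but all three never meet at one point, which is the plain Standard Model's failure of gauge coupling unification.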

But it isn't inconceivable that the Standard Model coupling constants could undergo gauge coupling unification at high energies if quantum gravity adjustments are made to the beta functions of the three Standard Model coupling constants (which can be determined exactly from theory and govern the running of these constants with energy scale).

Another set of phenomena that can't be explained with the "Core Theory" of general relativity and the Standard Model combined are the phenomena attributed to "dark matter". These require either new particles and forces not found in the Standard Model, or a modification of gravity as expressed in general relativity, or both. The source of dark matter phenomena tells us whether there is an observational motivation for devising new fundamental particles or not.

If dark matter is made out of beyond the Standard Model particles interacting possibly via beyond the Standard Model forces, then we need to explore how to extend the Standard Model to account for dark sector particles and forces (this is one of the strongest remaining motivations for supersymmetry theories).

On the other hand, if a theory of quantum gravity, for example, can explain the dark matter and dark energy phenomena that astronomers see as quantum gravity effects omitted from classical formulations of general relativity, without resort to beyond the Standard Model particles and forces, then the case that the Standard Model's collection of fundamental particles and forces is complete (apart from gravity and a graviton to carry that force between Standard Model particles) is much stronger, and the observational motivation to search for beyond the Standard Model particles is greatly decreased.

Also, neither the Standard Model nor general relativity provides an explanation for cosmological inflation in the wake of the Big Bang, despite astronomy evidence supporting this hypothesis. This could require new physics at extremely high energies almost immediately after the Big Bang.
 
  • #5
ohwilleke said:
...
Wow, that's one detailed post, didn't expect that, thanks!
 
  • #6
On another tack, here is what actual scientists are looking for at BESIII which is one of the most cutting edge high energy physics experiments in the world (that isn't anywhere near Switzerland). Mostly, this boils down to studying the formation and decays of unstable high energy hadrons (i.e. composite quark and gluon particles other than protons and neutrons) which are described mostly using perturbative QCD.

[Submitted on 5 Jan 2020]
The BESIII Physics Programme
Chang-Zheng Yuan (IHEP & UCAS), Stephen Lars Olsen (UCAS)
The standard model of particle physics is a well-tested theoretical framework, but there are still a number of issues that deserve further experimental and theoretical investigation. For quark physics, such questions include: the nature of quark confinement, the mechanism that connects the quarks and gluons of the standard model theory to the strongly interacting particles; and the weak decays of quarks, which may provide insights into new physics mechanisms responsible for the matter-antimatter asymmetry of the Universe. These issues are addressed by the Beijing Spectrometer III (BESIII) experiment at the Beijing Electron-Positron Collider II (BEPCII) storage ring, which for the past decade has been studying particles produced in electron-positron collisions in the tau-charm energy-threshold region, and has by now accumulated the world's largest datasets that enables searches for nonstandard hadrons, weak decays of the charmed particles, and new physics phenomena beyond the standard model. Here, we review the contributions of BESIII to such studies and discuss future prospects for BESIII and other experiments.
Comments: 27 pages, 7 figures, 2 tables. Published version with a few references updated
Subjects: High Energy Physics - Experiment (hep-ex); High Energy Physics - Phenomenology (hep-ph)
Journal reference: Nature Rev. Phys. 1 (2019) no. 8, 480-494
DOI: 10.1038/s42254-019-0082-y
Cite as: arXiv:2001.01164 [hep-ex]
(or arXiv:2001.01164v1 [hep-ex] for this version)

Also, don't be bamboozled by the line: "insights into new physics mechanisms responsible for the matter-antimatter asymmetry of the Universe."

This is code for "we're studying CP invariance violations and looking for signs of flavor changing neutral currents", even though we already know to an extremely high degree of certainty from prior experiments that no CP invariance violations that BESIII is capable of observing at the hadron mass energy scales that it operates at are even remotely capable of explaining the physics mechanisms responsible for the matter-antimatter asymmetry of the Universe. They are simply too small and also can't lead to matter-antimatter asymmetry unless there are also baryon number and lepton number non-conservation interactions (such as flavor changing neutral currents, neutrinoless double beta decay, and proton decay) going on at rates far greater than previous experiments at the BESIII energy scale have already completely ruled out.

But "explaining the matter-antimatter asymmetry of the Universe" sounds cooler than trying to explain charge-parity symmetry, baryon number, or lepton number to reporters whose stories might reach people willing to fund scientific experiments. So press releases and abstracts about CP violation studies (and about flavor changing neutral currents, the main kind of baryon or lepton number violating interaction that could show up at colliders if it actually occurred, contrary to the Standard Model prediction) routinely contain these stock pronouncements.
 
Last edited:
  • Love
Likes AndreasC
  • #7
This is great, I wasn't even aware of this experiment.
 
  • #10
haushofer said:
Love.
Fair enough.
 
  • #11
What it fails to tell us is whether the Big Bang originates from dense, high-temperature matter or from closely packed photons, with mass in the form of leptons and gluons resulting from photon-photon collisions (the biblical explanation).
 
  • Skeptical
Likes ohwilleke and weirdoguy
  • #12
nettleton said:
What it fails to tell us is whether the Big Bang

The Standard Model alone cannot say much about the Big Bang because it is a quantum field theory on Minkowski spacetime. The Big Bang is extremely non-Minkowskian, so it is outside the scope of the Standard Model.
 
  • Like
Likes ohwilleke
  • #13
There is a 7.9 sigma tension between different measurements of the absolute value of the CKM matrix element V(cb).

The global fit value is (41.47 ± 0.70) × 10^-3, but a paper with a new analysis of the data (from decays of neutral B mesons to negatively charged vector D mesons) finds (35.10 ± 0.41) × 10^-3, although a different choice of estimates (exploiting a pre-existing tension between exclusive and inclusive measurements) reduces, but does not eliminate, the strong tension.
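For the curious, the 7.9 sigma figure is just the difference between the two central values divided by their uncertainties combined in quadrature. A quick check, treating the two results as independent Gaussian measurements:

```python
from math import sqrt

# Two measurements of |V_cb| (central value, 1-sigma uncertainty), in units of 10^-3
global_fit = (41.47, 0.70)
new_analysis = (35.10, 0.41)

# Naive significance of the discrepancy, assuming independent Gaussian errors
tension = abs(global_fit[0] - new_analysis[0]) / sqrt(global_fit[1] ** 2 + new_analysis[1] ** 2)
print(f"{tension:.1f} sigma")  # prints "7.9 sigma"
```

If the two determinations share correlated systematics, the quadrature combination overstates the significance, which is part of why the exact number is debatable.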
 
  • #14
ohwilleke said:
There is a 7.9 sigma tension between different measurements of the absolute value of the CKM matrix element V(cb).

The global fit value is (41.47 ± 0.70) × 10^-3, but a paper with a new analysis of the data (from decays of neutral B mesons to negatively charged vector D mesons) finds (35.10 ± 0.41) × 10^-3, although a different choice of estimates (exploiting a pre-existing tension between exclusive and inclusive measurements) reduces, but does not eliminate, the strong tension.
To be honest, I don't think I understood anything you just said lol.
 
  • #15
AndreasC said:
To be honest, I don't think I understood anything you just said lol.

In simpler language, there is a strong tension of high statistical significance (the exact amount of which is debatable) between different measurements of one of the two dozen or so experimentally measured constants in the Standard Model of Particle Physics. This development is very new (this week) so there hasn't been much discussion of the issue yet. It is likely to be a question of measurement error somewhere, rather than of new physics.
 
  • Like
Likes AndreasC
  • #17
sbrothy said:

This one is kind of boring (there have been perhaps half a dozen or a dozen similar studies preceding it). It tests a hypothesis that is worth testing, but one that is already well confirmed.

In the Standard Model, the two dozen or so experimentally measured constants are (at any given momentum transfer scale) constant. But ordinarily we can only measure them in labs today, and humans have only been doing precision science for a few hundred years.

How do we know that physical constants really stay the same except by assumption?

We need to figure out ways to test that.

One way to do it is to look at light from old stars that has properties sensitive to the value of a Standard Model experimentally measured constant, such as the electromagnetic coupling constant, and to compare the predicted value to the observed one.

This study did that and, like all previous efforts, confirmed that the electromagnetic coupling constant of the Standard Model had the same value then as it does now, to the limit of astronomical observation precision, which is pretty decent given the methods (about ±5%), although nowhere near what is possible in a direct laboratory measurement today (parts per billion or so).

We do expect that the value of the electromagnetic coupling constant would have been different in the very early universe shortly after the Big Bang, because the energy scale was so high. But we don't have any good way of looking back directly to earlier than about 380,000 years after the Big Bang (in round numbers), when the background radiation temperature of the Universe was about 3,000 K, and we can actually estimate the value of the electromagnetic coupling constant only for much more recently emitted light. This study looked at 58 objects in the redshift range 0.2 ≤ z ≤ 1.5, where redshift z = 1.5 corresponds to light emitted roughly 9.5 billion years ago, about 4.3 billion years after the Big Bang (in round numbers).
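Since the redshift-to-age conversion trips people up, here is a minimal sanity check of what z = 1.5 corresponds to, assuming a flat LambdaCDM cosmology with round Planck-like parameters (the H0 and Omega_m values below are illustrative assumptions of mine, not numbers taken from the study):

```python
from math import sqrt

# Flat LambdaCDM with illustrative Planck-like parameters (an assumption, not from the paper)
H0 = 67.7                      # Hubble constant, km/s/Mpc
OMEGA_M = 0.31                 # matter density fraction
OMEGA_L = 1.0 - OMEGA_M        # dark energy density fraction (flatness)
HUBBLE_TIME_GYR = 977.8 / H0   # 1/H0 expressed in Gyr

def E(z):
    """Dimensionless Hubble rate H(z)/H0, neglecting radiation."""
    return sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L)

def lookback_gyr(z, steps=100_000):
    """Lookback time to redshift z: (1/H0) * integral of dz' / ((1+z') E(z')), midpoint rule."""
    dz = z / steps
    total = sum(1.0 / ((1.0 + (i + 0.5) * dz) * E((i + 0.5) * dz)) for i in range(steps))
    return HUBBLE_TIME_GYR * total * dz

t_lb = lookback_gyr(1.5)    # lookback time to z = 1.5
t_0 = lookback_gyr(3000)    # approximately the age of the universe (large-z limit)
print(f"z = 1.5: emitted {t_lb:.1f} Gyr ago, about {t_0 - t_lb:.1f} Gyr after the Big Bang")
```

The midpoint integration is crude but plenty accurate here; a proper cosmology library would give essentially the same answer for these parameters.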

To see anything really cool due to the running of the Standard Model parameters with energy scale, we'd need to reach energy scales we can't reproduce in particle accelerators, corresponding to less than one second after the Big Bang (i.e., the quark epoch).

Incidentally, another constant we can test (even though it isn't fundamental itself) is the pion mass. A fairly small tweak up in the pion mass, from say 135 MeV to 300 MeV, would make the dineutron, a.k.a. neutronium (a bound state of two neutrons), stable. Since it would have no protons (making it element zero), this atom, unlike all other atoms, would have no electrons, because the nucleus would be electromagnetically neutral overall. And since the mass of the pion is predominantly due to the strong force coupling constant (of its total 135 MeV, only about 7-10 MeV is due to the quarks), the existence or non-existence of stable neutronium is a way to tell whether there was a significant change in the strong force coupling constant.

The existence of stable neutronium would wildly disturb Big Bang Nucleosynthesis, whose predictions are very nearly reproduced by observation and which should have taken place 10-20 minutes after the Big Bang. Even smaller changes in the pion mass (in either direction, not just one), the pion being the main carrier of the nuclear binding force between nucleons, would dramatically throw off Big Bang Nucleosynthesis, so we can rule out with some confidence any material change in the strong force coupling constant at any time later than 10 minutes after the Big Bang.

Big Bang Nucleosynthesis also constrains lots of other Standard Model parameters to not have changed too much since 10 minutes after the Big Bang. For example, we can likewise rule out really huge changes (somewhat less than a factor of 10-100) in the up and down quark masses.

There are also Standard Model parameters, however, which it is difficult or impossible to confirm were the same long, long ago. For example, it is very hard to make statements about what the masses of the top and bottom quarks were billions of years ago from observational evidence. But, the fact that we can confirm that some of them didn't change makes us feel better about the hypothesis that none of them did.
 
Last edited:
  • Like
Likes AndreasC and sbrothy
  • #18
ohwilleke said:
This one is kind of boring (there have been perhaps half a dozen or a dozen similar studies preceding it). It tests a hypothesis that is worth testing, but one that is already well confirmed.
[...]

Yes... That kinda dawned on me after I posted the link and once I read it a little more carefully. Hence the added "Hm." No real news there. It's a somewhat recent paper though and about the fine structure constant no less - or 'no more' really :)

Regards.
 
  • #19
A couple more in the same vein today:

[Submitted on 25 Aug 2020]
Constraints on the electron-to-proton mass ratio variation at the epoch of reionization
S.A. Levshakov, M.G. Kozlov, I.I. Agafonova
Far infrared fine-structure transitions of CI and CII and rotational transitions of CO are used to probe hypothetical variations of the electron-to-proton mass ratio mu = m_e/m_p at the epoch of reionization (z > 6). A constraint on Delta mu/mu = (mu_obs - mu_lab)/mu_lab = (0.7 +/- 1.2)x10^-5 (1sigma) obtained at <z> = 6.31 is the most stringent up-to-date limit on the variation of mu at such high redshift. For all available estimates of Delta mu/mu ranging between z = 0 and z = 1100, - the epoch of recombination, - a regression curve Delta mu/mu = k_mu (1+z)^p, with k_mu = (1.6 +/- 0.3) x10^-8 and p = 2.00 +/- 0.03, is deduced. If confirmed, this would imply a dynamical nature of dark matter/dark energy.
Comments: 11 pages, 3 figures, 3 tables. Accepted for publication in MNRAS
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO); Astrophysics of Galaxies (astro-ph.GA)
Cite as: arXiv:2008.11143 [astro-ph.CO]
(or arXiv:2008.11143v1 [astro-ph.CO] for this version)

[Submitted on 24 Aug 2020]
A new era of fine structure constant measurements at high redshift
Dinko Milaković, Chung-Chi Lee, Robert F. Carswell, John K. Webb, Paolo Molaro, Luca Pasquini
New observations of the quasar HE0515−4414 have been made using the HARPS spectrograph on the ESO 3.6m telescope, aided by the Laser Frequency Comb (LFC). We present three important advances for α measurements in quasar absorption spectra from these observations. Firstly, the data have been wavelength calibrated using LFC and ThAr methods. The LFC wavelength calibration residuals are six times smaller than when using the standard ThAr calibration. We give a direct comparison between α measurements made using the two methods. Secondly, spectral modelling was performed using Artificial Intelligence (fully automated, all human bias eliminated), including a temperature parameter for each absorption component. Thirdly, in contrast to previous work, additional model parameters were assigned to measure α for each individual absorption component. The increase in statistical uncertainty from the larger number of model parameters is small and the method allows a substantial advantage; outliers that would otherwise contribute a significant systematic, possibly corrupting the entire measurement, are identified and removed, permitting a more robust overall result. The zabs=1.15 absorption system along the HE0515−4414 sightline yields 40 new α measurements. We constrain spatial fluctuations in α to be Δα/α≤9×10−5 on scales ≈20kms−1, corresponding to ≈25kpc if the zabs=1.15 system arises in a 1Mpc cluster. Collectively, the 40 measurements yield Δα/α=−0.27±2.41×10−6, consistent with no variation.
Comments: Submitted for publication to MNRAS. 10 pages, 7 figures. Online supplementary material is available upon request
Subjects: Cosmology and Nongalactic Astrophysics (astro-ph.CO)
Cite as: arXiv:2008.10619 [astro-ph.CO]
(or arXiv:2008.10619v1 [astro-ph.CO] for this version)
 
  • Like
Likes Amrator and Lord Crc
  • #20
That second one certainly seems more solid than my feeble contribution. I've only skimmed it so far (and it's no secret that I'm not exactly PhD material), but I'm not entirely mistaken if I conclude no new physics was found am I? ;(

EDIT:
If they're ushering in a new 'era' though there may be more to it than meets my eyes at first glance...

EDIT2:
Sorry for being lazy (I will read as much as I can understand) but is AI automation the new era they're talking about?

EDIT3:
As an aside, and I know this isn't really the place, I've noticed that AI and machine learning is on the rise. I read some corona-related papers where the use of ML (and possibly deep learning) has eliminated control groups, cutting a lot of time from the vaccine development process.

Heh, this was pretty hard to dig up again:

https://arxiv.org/abs/2003.12454
 
Last edited:
  • #21
Wow, didn't expect this thread to have any lasting value hahaha
 
  • Like
Likes sbrothy
  • #22
sbrothy said:
That second one certainly seems more solid than my feeble contribution. I've only skimmed it so far (and it's no secret that I'm not exactly PhD material), but I'm not entirely mistaken if I conclude no new physics was found am I? ;(

EDIT:
If they're ushering in a new 'era' though there may be more to it than meets my eyes at first glance...

EDIT2:
Sorry for being lazy (I will read as much as I can understand) but is AI automation the new era they're talking about?

EDIT3:
As an aside, and I know this isn't really the place, I've noticed that AI and machine learning is on the rise. I read some corona-related papers where the use of ML (and possibly deep learning) has eliminated control groups, cutting a lot of time from the vaccine development process.

Heh, this was pretty hard to dig up again:

https://arxiv.org/abs/2003.12454

The conclusion is no new physics. The electron-proton mass ratio remained unchanged. The revolution is in the instrumentation and data analysis developed that allowed that conclusion to be reached (which is really pretty cool).

The growth of AI and machine learning is a thing, but is off topic so I won't address it.
 
  • Like
Likes sbrothy
  • #23
I've opened a discussion of a new set of preprints discussing outstanding issues in cosmology in this Physics Forum thread.
 
Last edited:
  • Like
Likes sbrothy and Buzz Bloom
  • #24
ohwilleke said:
In a similar vein, we would ideally like to know why the laws of nature except gravity seem to be described by a SU(3)xSU(2)xU(1) group theory and how quantum gravity fits into this mix.
If we compactify the doublet of Minkowski spaces with an inverse metric, then we obtain the gauge group SU(3)xSU(2)xU(1) as the symmetry group of the compactified isotropic cone of an 8-dimensional space with a neutral metric. Maybe the new physics needs to "dance" from here?
 
  • #25
bayakiv said:
If we compactify the doublet of Minkowski spaces with an inverse metric, then we obtain the gauge group SU(3)xSU(2)xU(1) as the symmetry group of the compactified isotropic cone of an 8-dimensional space with a neutral metric. Maybe the new physics needs to "dance" from here?

My goal here is to outline the problems, not the solutions. In any case, developing a GUT or TOE is near the bottom of my list of priorities personally. It hasn't been very fruitful and has little observational evidence to guide it.
 
Last edited:
  • #26
ohwilleke said:
My goal here is to outline the problems, not the solutions. In any case, developing a GUT or TOE is near the bottom of my list of priorities personally. It hasn't been very fruitful and has little observational evidence to guide it.
What do you consider more fruitful?

Also yeah, that was part of the motivation of my question, because it seems really hard to me to develop a better theory if the one we already have already explains almost everything we observe.
 
  • #27
AndreasC said:
Also yeah, that was part of the motivation of my question, because it seems really hard to me to develop a better theory if the one we already have already explains almost everything we observe.
The fact of the matter is that the theory that explains almost everything is not a theory of everything. I propose to start from geometry, which generates the pseudo-Riemannian metric, the gauge groups, and the matrix Dirac algebra: the shape of an isotropic cone defines the metric of a pseudo-Riemannian space, the isomorphisms of a compactified isotropic cone define a group of gauge symmetries, and the algebra of linear vector fields of an 8-dimensional space with a neutral metric coincides with the matrix Dirac algebra.
 
  • #28
bayakiv said:
The fact of the matter is that the theory that explains almost everything is not a theory of everything
Right. But if it explains most observations, then you're not left with much to work with to go forward... That's why I find interesting what is available and what various people think is the way forward.
 
  • #29
AndreasC said:
Right. But if it explains most observations, then you're not left with much to work with to go forward...
To move forward in the creation of new physics, you should not grab the crutches of the old physics. The main thing here is a breakthrough idea, but as always happens - the new is the well-forgotten old. That is why I propose to take a closer look at the evolving vector field of the seven-dimensional sphere.
 
  • #30
AndreasC said:
What do you consider more fruitful?

Also yeah, that was part of the motivation of my question, because it seems really hard to me to develop a better theory if the one we already have already explains almost everything we observe.

Research motivated by dark matter and dark energy phenomena observations and constraints

We have at least one set of problems for which the evidence unequivocally is screaming at us that Core Theory is wrong (or more likely, incomplete): dark matter and dark energy phenomena (and also the need to make GR and the SM mathematically compatible). This is where the observational evidence is telling us that we need to do something.

In any particle based dark matter theory, you need to add at least one new fundamental particle to the SM. But after a couple of decades of astronomy observations, direct detection searches, N-body simulations, and particle physics searches for new particles, there is a very strong case that, at a minimum, any dark matter particle needs a dark sector self-interaction, and beyond that there is strong evidence that there needs to be some sort of feeble, non-gravitational interaction between ordinary baryonic matter and dark matter particles. So you really need not one but two new fundamental particles added to the SM in any particle based explanation of dark matter phenomena, which are undeniable, large inconsistencies with GR and the SM that can only be resolved with new physics.

If you go down the route of a graviton field based quantum gravity theory (as distinct from a theory in which space-time itself is discrete in some sense), which is the conventional wisdom regarding the correct answer, you moreover need something, quite possibly a third BSM particle or other new physics, to explain dark energy phenomena, because the global cosmological constant, which is an utterly natural integration constant in classical GR, is very hard to translate into a graviton field based quantum gravity theory.

Even if you go down the gravity modification path without dark matter particles to explain dark matter and dark energy phenomena, you still need to find a way to formulate GR as a quantum gravity theory that is mathematically consistent with the SM, and in many gravity modification theories you need not just one massless spin-2 graviton producing a tensor field (as in GR) but also a vector field and a scalar field, presumably associated with BSM spin-0 and spin-1 gravitons.

The SM can't fit any of that data. GR can't fit that data. There is a dominant observational evidence shortcoming in both the SM and GR that arises from dark matter and dark energy phenomena. There are lots and lots of experimental constraints from that observational evidence regarding what the solution can't be.

Pursuing an observational evidence driven agenda to come up with new physics explaining what most definitely is not explained by the SM and GR is by far a more fruitful approach than taking the mathematical structures we've already inferred, tinkering with them from the same foundation that has been available to every theorist in the field (a group that includes a decent share of the smartest people in the math/physics domain on the planet) for almost half a century, and expecting to make some novel breakthrough.

Also, any time your new physics solution generates lots of new particles and forces that we haven't observed, that should be a yellow flag so big it could be a piece of performance art. If your theory calls for three Higgs doublets, a full array of superpartner particles, and seven new space-time dimensions for which there is absolutely no positive evidence whatsoever, you are probably on the wrong track, no matter how beautiful and elegant the mathematical structure may be.

The fruitful approach is to focus on solving the problems we have and not the anomalies we wish we had, even if that means that your particular area of expertise and bag of solutions isn't very helpful for solving the problems that we know we have with the status quo.

Neutrino physics

Another big area where the SM does not explain everything is neutrino physics.

To some extent, the SM modified to include massive neutrinos does give us "shut up and calculate" class results, subject to better determination of the seven experimentally measured parameters of the SM specific to neutrino physics (the absolute values of the three neutrino mass eigenstates and the four PMNS matrix parameters).
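To illustrate the "shut up and calculate" character of the massive-neutrino SM, here is a minimal two-flavor vacuum oscillation sketch. The mass splitting, mixing, baseline, and energy below are illustrative round numbers of my own choosing, not fitted values, and matter effects and the full three-flavor PMNS structure are ignored:

```python
from math import sin

# Two-flavor vacuum oscillation probability:
#   P = sin^2(2*theta) * sin^2(1.267 * dm2[eV^2] * L[km] / E[GeV])
def oscillation_probability(sin2_2theta, dm2_ev2, baseline_km, energy_gev):
    phase = 1.267 * dm2_ev2 * baseline_km / energy_gev
    return sin2_2theta * sin(phase) ** 2

# Illustrative round numbers for a T2K-like setup (an assumption, not fitted values):
# atmospheric splitting ~2.5e-3 eV^2, near-maximal mixing, 295 km baseline, 0.6 GeV beam
p = oscillation_probability(sin2_2theta=0.95, dm2_ev2=2.5e-3, baseline_km=295.0, energy_gev=0.6)
print(f"P = {p:.2f}")  # close to the oscillation maximum for these numbers
```

The point is that once the seven neutrino parameters are pinned down experimentally, predictions like this are routine arithmetic; the open questions are about what lies behind those parameters.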

But there are lots of potential experimental anomalies in tension with this model, and the neat and clean Higgs field explanation for the masses of the other fundamental particles in the SM does not have a single, clear and obvious generalization to neutrinos, which is why the original SM of the 1970s through the early 1990s assumed (wrongly, but not deeply wrongly at first approximation) that neutrinos were massless.

We are doing experiments here as well that can guide theory: reactor experiments, cosmic ray detection experiments, neutrinoless double beta decay searches, searches for non-standard neutrino interactions at colliders, beta decay experiments trying to determine the absolute neutrino mass, cosmology modeling, etc.

Putting the SM treatment of neutrinos on stronger footing would also potentially be a very fruitful pursuit, particularly if it is driven by observational evidence first.

Exotic hadrons

High energy physics has a closet full of hadron resonances for which it has no consensus description or prediction. One of the more fruitful ways to make progress in evaluating the validity of the SM, and determining whether QCD is sufficient to explain what we see or whether we need new physics, is to continue studying these exotic hadron resonances until there is a consensus on what the hadrons we've seen really are, and on what hadrons are out there to be observed that we haven't seen yet. This is an area where we have experimental evidence to guide theory.

These resonances are one of the best places to kick the tires and look for anomalies that could say that there is new physics out there, or could say that QCD needs no further modification.
 
Last edited:
  • Like
Likes AndreasC
  • #31
ohwilleke said:
Research motivated by dark matter and dark energy phenomena observations and constraints
That was one of my main instincts. But it seems like in recent decades it hasn't been the main focus. Why do you think that is?
 
  • #32
AndreasC said:
That was one of my main instincts. But it seems like in recent decades it hasn't been the main focus. Why do you think that is?

There are 34,672 papers at arXiv that concern dark matter, and those papers are found in almost every conceivably relevant subfield. It has definitely received a lot of attention. Astronomers have had more to say than physicists in many other subfields, but I would strongly disagree that it hasn't been one of the main focuses in recent decades.

Indeed, as the LHC and prior successes of the SM have dried up other opportunities to discover new physics that still seemed possible a few decades ago, it has increasingly moved to center stage.

Of course, the latest new toy for the physics community, the LHC, is excellent for investigating the properties of the Higgs boson but only a marginal improvement for dark matter searches. So this, along with hadronic physics, lepton universality violations, and a few anomalous experimental results (most of which haven't panned out), has also taken up some of the spotlight. And there are only so many times that the public can get interested in null results from direct dark matter detection experiments and the like, even when those investigations produce lots of new scientific papers.
 
  • Like
Likes AndreasC
  • #33
ohwilleke said:
Astronomers have more to say than physicists in many other subfields, but I would strongly disagree that it hasn't been one of the main focuses in recent decades.
Yes, I worded it poorly. While there is a lot of attention paid to dark matter, the impression I get (which may be wrong) is that it isn't the main focus of physicists working on advancing the foundations of physics and going beyond the standard model. At least that was my impression, but I'm not basing it on anything concrete. I guess it makes sense that things have shifted after all the null results.
 
  • Like
Likes ohwilleke

1. What is the Standard Model?

The Standard Model is a theory in particle physics that describes the fundamental particles and their interactions. It is considered the most successful and well-tested theory of particle physics to date.

2. What are the most important phenomena that the Standard Model can't explain?

The Standard Model can't explain several phenomena, including the existence of dark matter, the matter-antimatter asymmetry in the universe, and the hierarchy of particle masses.

3. How does the Standard Model explain the fundamental particles?

The Standard Model categorizes fundamental particles into two groups: fermions and bosons. Fermions are matter particles, such as quarks and leptons, while bosons are force-carrying particles, such as photons and gluons.

4. Why is the Standard Model considered incomplete?

The Standard Model does not include gravity, which is a fundamental force in the universe. Additionally, it does not explain the observed hierarchy of particle masses or the existence of dark matter.

5. What are some proposed theories that aim to go beyond the Standard Model?

Some proposed theories that aim to go beyond the Standard Model include Supersymmetry, Grand Unified Theories, and String Theory. These theories attempt to address the shortcomings of the Standard Model and provide a more complete understanding of the fundamental particles and their interactions.
