# Naturalness: dimensionless ratios

Auto-Didact
The concept of naturalness as dimensionless ratios of parameters being of order unity has recently come under criticism, most prominently because Sabine Hossenfelder wrote a book (Lost in Math) criticizing it.

Very recently, however, Peter Shor and Lee Smolin had a discussion about this over at Peter Woit's blog, in which Smolin explained why 'order unity' is important for the practice of physics, going back to Fermi and Feynman:
Peter Shor said:
@Lee:

When you say that “any pure dimensionless constants in the parameters of a physical theory that are not order unity require explanation,” you are implicitly putting a probability distribution on the positive reals which is sharply peaked at unity.

Doesn’t this assumption also require explanation? Why should the range of numbers between 1 and 2 be any more probable than the range between 10^10 and 10^20? Aren’t there just as many numbers in the range between 10^10 and 10^20 as there are between 1 and 2? (Uncountably many in each.)
Lee Smolin said:
Dear Peter Shor,

Yes, exactly, and let me explain where that expectation for dimensionless ratios to be order unity comes from.

Part of the craft of a physicist is that a good test of whether you understand a physical phenomenon (say, a scattering experiment) is whether you can devise a rough model that, with a combination of dimensional analysis and order-of-magnitude reasoning, gets you an estimate to within a few orders of magnitude of the measured experimental value. People like Fermi and Feynman were masters at this, a skill that was widely praised and admired.

The presumption (rewarded in many, many cases) was that the difference between such rough estimates and the exact values (which were by definition dimensionless ratios) were expressed as integrals over angles and solid angles, coming from the geometry of the experiment, and these always gave you factors like 1/2pi or 4pi^2, which were order unity.

Conversely, if your best rough estimate does not get you within a few orders of magnitude of the measured value, then you don’t understand something basic about your experiment.

Seen from the viewpoint of this craft, if your best estimate for a quantity like the energy density of the vacuum is 120 orders of magnitude larger than the measured value, the lesson is that we don’t understand something very basic about physics.

Thanks,

Lee
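The craft Smolin describes can be made concrete with a classic textbook case (my own illustration, not from the thread): dimensional analysis on the hydrogen atom. The only energy one can build from the electron rest energy and the fine-structure constant is $m_e c^2 \alpha^2$, and the measured ground-state energy differs from this by a dimensionless factor of exactly 1/2, i.e. order unity.

```python
# Dimensional-analysis estimate of the hydrogen ground-state energy.
# The only energy scale built from m_e, c and alpha is m_e c^2 * alpha^2.
m_e_c2_eV = 510_998.95        # electron rest energy in eV
alpha = 1 / 137.035999        # fine-structure constant

estimate = m_e_c2_eV * alpha**2   # rough estimate, ~27.2 eV
exact = 13.605693                 # measured ionization energy in eV

ratio = exact / estimate          # the leftover dimensionless factor
print(f"estimate = {estimate:.1f} eV, ratio = {ratio:.3f}")  # ratio ~ 0.5
```

The point is precisely Smolin's: the rough estimate lands within order unity of the exact answer, and the leftover factor (here exactly 1/2) comes from solving the actual dynamics.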
From Smolin's historical explanation, namely that naturalness is essentially a heuristic tool based on dimensional analysis, making it a good strategy for quickly solving Fermi problems, I would say that adherence to naturalness is a pretty strong criterion for 'doing good physics'.


Gold Member
Some science writers, like Sabine Hossenfelder, seem to argue that naturalness isn't an argument at all and that one shouldn't worry about its absence. Most confusingly for me, even John Baez commented on one of these blogs that naturalness does not make predictions. This seems totally counter to the modern Wilsonian effective field theory approach to particle physics. If one assumes some cutoff on the order of $10^{15}$ GeV and considers the leading marginal/irrelevant operators, one finds an explanation of the global symmetries present in the Standard Model, and one can even predict neutrino masses in the eV range (as Weinberg did by precisely this line of thought in 1979). Indeed, all of the old pre-Wilsonian arguments for leaving out nonrenormalizable interactions are now instead arguments about naturalness, and we see that it has incredibly impressive predictive power.
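Weinberg's estimate mentioned above is a one-line calculation; here is a sketch using standard ballpark values ($v \approx 246$ GeV for the Higgs vacuum expectation value; the cutoff $\Lambda = 10^{15}$ GeV is the assumption from the post):

```python
# The leading irrelevant (dimension-5) operator, (LH)(LH)/Lambda,
# generates neutrino masses m_nu ~ v^2 / Lambda after electroweak
# symmetry breaking.
v_GeV = 246.0        # Higgs vacuum expectation value
Lambda_GeV = 1e15    # assumed cutoff scale

m_nu_eV = (v_GeV**2 / Lambda_GeV) * 1e9   # convert GeV -> eV
print(f"m_nu ~ {m_nu_eV:.3f} eV")          # ~0.06 eV, in the observed range
```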

But it does seem like naturalness breaks down for a number of observables, and this requires a satisfactory explanation (and I do not find Hossenfelder's dismissal of naturalness satisfactory). Here's a recent discussion about this crisis which I did enjoy: https://arxiv.org/pdf/1710.07663.pdf

suprised
Hossenfelder's crusade against particle physics is getting more ridiculous every day. In essence she promotes the end of science. It is as if a remote forest were discovered where all the trees have the same height up to one nanometer. If people say, this is unnatural, there must be some explanation for it, her answer would be: I can’t define probability here so any answer must be meaningless, people who try to find an explanation are misguided and I demand they stop working on this asap.

Fortunately no serious scientist pays attention to this, no matter how shrill her voice becomes.

Gold Member
Funny example, suprised, as such a coincidence should be called "natural" in our context. On the other hand, we always have the example of the excessively natural coupling of the top quark, which almost nobody worries about.

Auto-Didact
In essence she promotes the end of science.
The end of particle physics is hardly the end of science.
It is as if a remote forest were discovered where all the trees have the same height up to one nanometer. If people say, this is unnatural, there must be some explanation for it, her answer would be: I can’t define probability here so any answer must be meaningless, people who try to find an explanation are misguided and I demand they stop working on this asap.
I can see what you're trying to say, but this seems like a gratuitous strawman to me. Naturalness can be a good tool, but need not be; as Hossenfelder points out there certainly seems to be a problem with making some arguments from naturalness in particle physics.

As Smolin makes clear, this means that something very basic about physics is not understood, i.e. the mathematics underlying particle physics is only valid to some certain extent, and its extrapolated accuracy becomes very questionable beyond that domain.

suprised
The end of particle physics is hardly the end of science.

The end of science in the sense of trying to find an explanation. This is about the scientific method. Essentially she claims that an observed fine-tuning does not need to have an explanation, that any observed numerical quantity is as good as any other one, so stop here, cancel experiments, it is all meaningless.

Another example: why are the pion masses so much smaller than the natural scale of QCD? Shouldn't there be an explanation? Hossenfelder's advice would be: there is nothing wrong, why waste time on an ill-posed problem? End of science.

But of course there is an explanation of this. Namely, there is a broken symmetry behind it, and the pions are (pseudo-)Goldstone bosons. That is the science that she, and anyone who would listen to her, would miss.
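As a rough numerical check of the Goldstone explanation (my own sketch, not from the post): the Gell-Mann–Oakes–Renner relation ties $m_\pi^2 f_\pi^2$ to the light quark masses times the chiral condensate. With ballpark values (conventions for $f_\pi$ and the condensate vary between references, so only the order of magnitude is meaningful):

```python
# GMOR relation: m_pi^2 f_pi^2 ≈ (m_u + m_d) * |<qbar q>|
# All values in GeV units; these are ballpark numbers.
m_pi = 0.135          # neutral pion mass, GeV
f_pi = 0.092          # pion decay constant (92 MeV convention), GeV
m_u_plus_m_d = 0.007  # sum of light quark masses, GeV
condensate = 0.250**3 # chiral condensate magnitude, ~(250 MeV)^3, GeV^3

lhs = m_pi**2 * f_pi**2
rhs = m_u_plus_m_d * condensate
print(f"lhs/rhs = {lhs/rhs:.2f}")  # order unity, as naturalness expects
```

The pion is light not by accident but because it is protected by an approximate symmetry: its mass squared vanishes as the quark masses go to zero.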

Further example: the almost vanishing cosmological constant. Apart from all the usual arguments, here is a simplified version: how come the dynamics of the big bang was so foresighted that (leaving all other phenomena aside), after the QCD phase transition, which would dump vacuum energy on the order of 1 GeV, the outcome is an incredibly small number? Shouldn't there be an explanation of this?
Or where precisely is the fault in asking this question in the first place?
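The size of the mismatch in such vacuum-energy arguments can be sketched in a few lines (standard ballpark figures, assuming a naive Planck-scale cutoff; this reproduces the roughly 120 orders of magnitude Smolin cites):

```python
import math

# Naive QFT estimate: vacuum energy density ~ E_Planck^4.
# Observed dark-energy density ~ (a few meV)^4.
E_planck_GeV = 1.22e19   # Planck energy
E_obs_GeV = 2.3e-12      # ~2.3 meV, observed dark-energy scale

orders = 4 * math.log10(E_planck_GeV / E_obs_GeV)
print(f"mismatch: about 10^{orders:.0f}")   # roughly 10^123
```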

Hossenfelder would say: I tune the boundary conditions as I like, so there is no problem explaining this or any other number. Again, the end of science!

Naturalness is not a tool but a heuristic principle that generally works very well in all sorts of contexts. It is a well-proven guideline, and therefore it was a good idea to see how far we could get by applying the most naive recipes of particle theory to the problem of the Higgs mass etc. These ideas were too naive with hindsight, and didn't work, so this indeed created a sense of crisis in the community. But the right attitude is to work harder, be more clever, and take more precise measurements, in order to find an explanation eventually, rather than declaring that there is no problem with hierarchies and fine-tuning and stopping doing science.

Even worse is to propagate this attitude and make a hullabaloo all over the internet and media, out of a filter bubble that has little connection to real science. This actually points to another problem, but one that has nothing to do with physics: it is the phenomenon of “failed physicist sees physics fail”, which is all too common in the blogosphere.

Gold Member
Even worse is to propagate this attitude and make a hullabaloo all over the internet and media, out of a filter bubble that has little connection to real science. This actually points to another problem, but one that has nothing to do with physics: it is the phenomenon of “failed physicist sees physics fail”, which is all too common in the blogosphere.

This is my real problem with Hossenfelder. Take, for example, her recent response to Jamie Farnes' paper: http://backreaction.blogspot.com/2018/12/no-negative-masses-have-not.html. Farnes did not write a press release on his paper as soon as he posted it to arXiv; rather, Farnes waited until it had gone through peer review and was published before giving a writeup of his work for the general public.

Hossenfelder then writes a post on her blog, which has a large layman audience, tearing it apart. Is this what she considers the correct avenue for scientific discourse? Farnes did the correct thing and waited for the work to pass peer review; at the very least Hossenfelder should have written up a note to arXiv (like Brian Skinner did in response to the recent claim of high-Tc superconductivity) before going public with her issues with the paper. Instead she sows doubt among the general public without engaging in the scientific process. It's very dangerous, because we do not need popular opinion - which is more easily swayed by charisma and persuasion rather than technical arguments - to be what determines our research directions and grant funding.

Auto-Didact
It's very dangerous, because we do not need popular opinion - which is more easily swayed by charisma and persuasion rather than technical arguments - to be what determines our research directions and grant funding.
I agree that it is very dangerous, but I do not necessarily agree that it isn't needed. As Hossenfelder and legions of other scientists (not merely physicists) have pointed out, there is today a problem with the self-corrective mechanism of science, to put it mildly. Other areas of professional human endeavor (such as law, medicine, government and business) have faced similar problems.

Scientists are no exception: scientists display group behavior with all its ugly consequences. The problem with physicists (especially theoreticians), in contrast to practitioners in all other professional endeavors, is that the actual validity of their work usually has no direct consequences for others or themselves. This can cause a physicist, especially once he starts to realize he can get away with it, to adopt a somewhat carefree, cavalier attitude w.r.t. his own work and even the work of his 'friends'; this leads to a great many physicists who know how to game the academic system.

From those other fields, there is a surefire way to combat this kind of behavior: scheduled or random inspections (think quality assurance, internal affairs, etc.) actively challenging workers by publicly reviewing their work and having them openly defend it: if they cannot justify their work adequately, they get penalized accordingly. Usually this is done by independent groups of experts in the same field in order to make sure that the judgement is valid.

If every practitioner (or just the large majority) comes to believe that in their day-to-day work they are constantly at risk of having this happen to their work (Panopticon), the idea is that practitioners will begin to self-regulate their behavior, acting more carefully in their work and more responsibly by notifying colleagues whose work might be at risk and therefore put the entire department at risk.

In essence this is just an expansion of the peer-review process, albeit one far less opaque and far less susceptible to corruption; it goes without saying this expanded process isn't perfect. In any case it is an empirically proven method for implementing behavioral self-correction in groups of people. In clinical medicine for example, this is how quality assurance of care and professional expertise is maintained among physicians.

Staff Emeritus
This is my real problem with Hossenfelder.

Mine is that she describes people she has scientific differences with as "liars". She has descended into Lubos-land. I simply don't take her criticism seriously any more.

Gold Member
The problem with physicists (especially theoreticians) in contrast to practitioners in all other professional endeavors, is that the actual validity of their work usually has no direct consequences for others or themselves.

What criterion for "validity" are you using for this? And what consequences do you propose for theoreticians breaching your rules of validity?

I have seen many theorists - some with Nobel prizes - discredited within academia for their contributions being low-caliber. As far as I can tell, the old guard is not sacrosanct.

This can cause the physicist, especially once he starts to realize he can get away with it, to adopt a somewhat care-free, cavalier attitude w.r.t. his own work and even the work of his 'friends'; this leads to a large many physicists who know how to game the academic system.

This is totally unfamiliar to my experience as a research physicist; in fact, I can think of counterexamples. Can you please give an explicit example before making such a damning accusation?

From those other fields, there is a surefire way to combat this kind of behavior: timed or at random inspections (think quality assurance, internal affairs, etc) actively challenging workers, by publically reviewing their work and having them openly defend it: if they cannot justify their work adequately, they will get penalized accordingly. Usually this is done by independent groups of experts in the same field in order to make sure that their judgement is valid.

I'm just confused by this entire paragraph. All work in science is peer reviewed, period. What are you talking about? What "groups of experts in the same field" do you propose who aren't already doing all the peer reviewing? It's the same people!

If every practitioner (or just the large majority) now comes to believe that in their day-to-day work they are constantly at risk of having this happen to their work (Panopticon), the idea is that the practitioners will begin to self-regulate their behavior by acting more careful in their work and more responsible by notifying colleagues who might be at risk - and therefore putting the entire department at risk.

In essence this is just an expansion of the peer-review process, albeit one far less opaque and far less susceptible to corruption; it goes without saying this expanded process isn't perfect. In any case it is an empirically proven method for implementing behavioral self-correction in groups of people. In clinical medicine for example, this is how quality assurance of care and professional expertise is maintained among physicians.

I'm not going to lie: the more your line of thought goes on, the more it resembles the Cultural Revolution rather than an actual scientist interested in the truth. (And I feel the need to say that I am a leftist who is not using the Cultural Revolution as a red-scare tactic: I really do feel like this is an anti-intellectual attack.)

More directly: who is it that you think should judge what constitutes a good cosmology paper, if not the peer reviewers or the relevant journals? Who are these "independent groups of experts in the same field" (as though they are not refereeing journals already)?

If you could not tell from my previous paragraphs (and the context of our conversation), I'm worried that you want these fields to be judged by those who are not experts.

romsofia
This is my real problem with Hossenfelder. Take, for example, her recent response to Jamie Farnes' paper: http://backreaction.blogspot.com/2018/12/no-negative-masses-have-not.html. Farnes did not write a press release on his paper as soon as he posted it to arXiv; rather, Farnes waited until it had gone through peer review and was published before giving a writeup of his work for the general public.

Hossenfelder then writes a post on her blog, which has a large layman audience, tearing it apart. Is this what she considers the correct avenue for scientific discourse? Farnes did the correct thing and waited for the work to pass peer review; at the very least Hossenfelder should have written up a note to arXiv (like Brian Skinner did in response to the recent claim of high-Tc superconductivity) before going public with her issues with the paper. Instead she sows doubt among the general public without engaging in the scientific process. It's very dangerous, because we do not need popular opinion - which is more easily swayed by charisma and persuasion rather than technical arguments - to be what determines our research directions and grant funding.

I'm not at all a fan of Hossenfelder's recent criticisms of physics, but I think it's perfectly fine for her to express her views on her blog and book. I myself became interested in string theory mainly after the Smolin and Woit books.

BTW, I found your comment on naturalness and the Wilson viewpoint interesting. Preskill, in his eulogy, does wonder whether Wilson could have steered us wrong in this case. https://quantumfrontiers.com/2013/06/18/we-are-all-wilsonians-now/

I should of course point out that if Wilson was wrong here, that does not mean Hossenfelder is right (ie. I largely agree with you that what she writes is hostile to good science).

Auto-Didact
Before I continue, I need to make a short digression to make crystal clear that my suggestions in principle have nothing whatsoever to do with nonsensical SJW measures, equal social diversity rates or unproven/counterproductive anti-sexual harassment proposals. Carrying on.
What criterion for "validity" are you using for this? And what consequences do you propose for theoreticians breaching your rules of validity?
My rough operationalization of validity is w.r.t. the conclusions of the researcher, based on expert appraisal of the quality and originality of the researcher's chosen methodology. The experts should consist of a 'jury' of, let us say, about 5 to 8 independent active practitioners in that same subfield, each with at least 15 years' experience, picked at random from the pool of all experts in that subfield.

I am definitely not the one to decide what validity is; this panel of experts in literally every (sub)field of physics would need to somewhat aristocratically decide every few years what constitutes quality in new research and periodically publish these recommendations as guidelines for the subfield. I would merely suggest that their independent judgements be ordinally ranked, e.g. as 'high quality', 'average quality', 'low quality', and that it be explicitly explained how this reflects the expert opinion within the subfield and why.

What actually constitutes quality and originality is far more difficult to establish than it seems, since quality can vary over time both within and between subfields, and originality is even more slippery; there are already highly advanced methodologies and measures invented to capture exactly such things, e.g. topological data analysis and dynamic network analysis come directly to mind. Moreover, if some research is of a highly interdisciplinary nature, it might need to be judged by experts in each field separately as well as by a combination of experts from both fields.

To make this more concrete I will give an example: what constituted high quality research methodology in optomechanics in 2010 doesn't necessarily constitute high quality research methodology in the same subfield in 2018, nor does it necessarily constitute high quality research methodology in another subfield of physics, e.g. in high temperature superconductivity. There are even curious differences, e.g. a novel mathematically advanced methodology invented in high energy physics may literally exist under another name in other older fields such as fluid dynamics and even be regarded as pedestrian within that subfield, since their own methodologies have strongly evolved since.
I have seen many theorists - some with Nobel prizes - discredited within academia for their contributions being low-caliber. As far as I can tell, the old guard is not sacrosanct.
I agree that this is a problem, which is exactly why I believe when judging validity of conclusions one needs to take into account both quality and originality. Moreover, it might be that anyone who has a Nobel Prize would need to get judged in an altogether different manner than non-Nobel laureates.
This it totally unfamiliar to my experience as a research physicist - in fact I can thing of counterexamples. Can you please give explicit example before such a damning accusation?
I have done research in multiple fields of science (physics, neuroscience, economics, data science, medicine, psychology). In all of them I have invariably seen many researchers and practitioners, both consciously and unconsciously, take shortcuts and cut corners at times, for a variety of reasons: lack of time; frustration with co-workers; not receiving payment for a particular aspect of work; hyping work purely to get funds; working towards performance indices instead of actually trying to perform high-quality work; choosing successful low-risk strategies of known low utility over more promising but riskier ones (which carry a smaller chance of publishing in a high-impact journal), thereby choosing their career over bettering science; not publishing negative findings; not speaking up against results for fear of risking their positions/careers; etc.

My intention is not to judge any researchers, but to make them aware that, despite any intentions, they are human and therefore susceptible to the same biases and behavioral traits as other humans. This also means that the directly perceived consequences and appreciation of their work, not only by other researchers but also in general, have an effect on how they do their work.
I'm just confused by this entire paragraph. All work in science is peer reviewed, period. What are you talking about? What "groups of experts in the same field" do you propose who aren't already doing all the peer reviewing? It's the same people!
There is an argument to be made that the peer-review system as it stands is a bit too opaque. This invariably leads to clique formation, i.e. counterproductive group behavior making the experts more homogeneous in thinking and behavior than is warranted or reflective of actual practice. The usually used performance indices, such as citation indices, cannot control for this either, since they are far too simplified, focusing mostly on the productivity of individual researchers in terms of papers and the short-term progress of a subfield rather than on subfields and the longer-term picture; moreover, the advanced measures required cannot be properly chosen or wielded by administrators or regulators if they do not possess the necessary mathematical background for understanding these tools.

There is a large amount of empirical research demonstrating this, and showing exactly what the negative consequences for a field are if such things are left unchecked. The consequences of making such strategic mistakes in theoretical physics don't seem as dire as, e.g., in engineering; such differences in the appreciation of consequences among practitioners literally lead to the very idea that cutting certain corners can be justified for whatever reason, e.g. heuristic or aesthetic reasons, as Hossenfelder points out. Every junior researcher invariably mimics both the good and the bad of coworkers and more experienced researchers when learning how to do research and how to survive as a researcher in practice; before long they may have developed a strategy that they need to keep to in order to survive or even thrive in practice, independent of increasing the quality of their papers.

In the practice of theoretical physics, this would largely translate into a lack of innovation in theory production and an excess of mimicking behavior, larger than should be expected based on the makeup of both the population of research programmes and researchers. In the ideal case, higher quality would lead not merely to more funds but to a higher wage as well; I believe the relatively low wages in physics actually cause much of the 'cutting corners'/lack-of-innovation problem at the microeconomic and psychological level for individuals.
I'm not going to lie - the more your line of though goes on, the more it resembles the Cultural Revolution rather than an actual scientist interested in the truth. (And I feel the need to say that I am a leftist who is not using the Cultural Revolution as a red scare tactic: I really do feel like this is an anti-intellectual attack.)
I fully understand your trepidation. I am not saying that this needs to be done per se; I am saying that there is a good case to be made that physicists should themselves start wanting to do this for all the benefits; I think researchers like Hossenfelder see this as well. The fact is that peer-review systems in all professional endeavors have evolved with time, in order to adapt to the environmental changes in their field (number of practitioners, amount of funds, number of research programmes, breakthroughs, etc.).

The review system in the practice of physics, apart from the arXiv, on the other hand seems to have stagnated, even directly discouraging innovation, despite extreme changes in the landscape of actual practice: the number of novel mathematical, statistical and computational tools and techniques available alone has already exploded to such a degree that it is somewhat of a mystery that physicists, given the choice, tend to stick to relatively outdated or less potent known techniques.

It is somewhat puzzling to me that (theoretical) physics seems to be the only STEM discipline that does this to such a large extent; it is almost as if the familiar techniques already oversaturated physicists' wants and needs 50 years ago, and the utility of most new techniques cannot even be judged because the backlog of available techniques has become so large.

Another thing is that most physicists, myself included, will eventually start complaining that they just want to get to the physics instead of worrying about such matters; the problem is that no one else is properly equipped to do this job except physicists. For contrast and example, in clinical medicine the exact same thing occurred; physicians eventually realized this for themselves, and over a period of 50 years incorporated the updated peer-review process into the actual practice of doing clinical medicine. Their review system is continuously monitored, and guidelines are updated periodically by large councils consisting of practicing and retired expert physicians.
More directly: who is it that you think should judge what constitutes good cosmology paper, if not the peer reviewers or the relevant journals? Who are these "independent groups of experts in the same field" (as though they are not refereeing journals already)?

If you could not tell from my previous paragraphs (and the context of our conversation), I'm worried that you want these fields to be judged by those who are not experts.
As I said before: a small group of independent practicing experts in the same subfield with about 15 years or more of practical experience; best of all would be if each expert worked in a competing research programme. To make it even clearer: every single practicing physicist would eventually reach expert status and would therefore need to be able to do this on the fly.

It should be clear that what I am describing here does not yet exist in practice for physics. I think the best institutions to set up such a programme are large institutions such as the APS and the biggest research journals. One thing is sure: in order to carry out such research quality assurance, far more physicists would need to be trained than are being trained today; I see this as a very good thing, not least because it creates a new class of jobs which cannot disappear.

A potential strong benefit is that physicists who become experts in doing this could be hired by institutions of other professions to review their respective research methodologies w.r.t. physics, especially in government and healthcare; the irony is that this actually happened more during the 20th century, but then stopped for a large variety of reasons. In any case, I can tell you that right now most practitioners in most other fields know practically no physics and have never had any significant professional interaction with a physicist or mathematician; the negative consequences of this should be starkly obvious for both parties.

seazal
I'm not at all a fan of Hossenfelder's recent criticisms of physics, but I think it's perfectly fine for her to express her views on her blog and book. I myself became interested in string theory mainly after the Smolin and Woit books.

BTW, I found your comment on naturalness and the Wilson viewpoint interesting. Preskill, in his eulogy, does wonder whether Wilson could have steered us wrong in this case. https://quantumfrontiers.com/2013/06/18/we-are-all-wilsonians-now/

I should of course point out that if Wilson was wrong here, that does not mean Hossenfelder is right (ie. I largely agree with you that what she writes is hostile to good science).

Ten years ago, we wouldn't have thought someone mainstream or popular in the physics community would argue along Hossenfelder's lines. So it's not a stretch to imagine that, 10 years from now, a popular physicist would argue that both naturalness and Hossenfelder's view are wrong: that all the Standard Model parameters were put in by hand, and that it's the end of science because we couldn't bridge the gap or understand it anymore. I'd like to know whether such beliefs are already emerging among physicists, or, if this were proposed in the future, whether it would be a valid position. The future Hossenfelder counterpart might argue that we have to face the truth that this is what is going on, and that no naturalness argument can work in any beyond-the-Standard-Model theory because it is simply so.

Look, I'm not saying I agree with it. I just want to know whether a physicist proclaiming this in the future would be making a valid argument, or whether thinking this would be outright disallowed in physics.

Auto-Didact
I'd like to know if there are already emergence of such beliefs now in any physicist or if this would be proposed in the future whether it is a valid reason.
Look. I'm not saying I agree with it. Just want to know if any physicists proclaiming this in the future would be a valid argument or outright disallowed in physics to think of such.
I completely agree with this. It is far better to be notified of what physicists actually, honestly think and feel about the academic problems they face in the practice of physics, instead of sticking your head in the sand and pretending that there aren't any real problems. Ideally, it is exactly physicists who should be the ones most worried about such matters and most dedicated to finding solutions, long before it reaches the point that the public chooses to get involved.

Auto-Didact
BTW, I found your comment on naturalness and the Wilson viewpoint interesting. Preskill, in his eulogy, does wonder whether Wilson could have steered us wrong in this case. https://quantumfrontiers.com/2013/06/18/we-are-all-wilsonians-now/
Very interesting, especially the attitude and/or feeling of almost disgust displayed toward unnatural theories.

This seems to me clearly reminiscent of the strong psychological reactions, i.e. the theoretical biases, that pre-20th-century mathematicians held against concepts such as non-continuity, non-smoothness and other peculiar functions labeled 'pathological' (i.e. sick or diseased) - concepts which today are, of course, all universally accepted and respected by mathematicians.

Gold Member
I cannot understand why Wilson is blamed, er, sorry, acknowledged for the invention of the effective field theory paradigm. It does not follow from the rest of the eulogy. Had he been serious about effective theories, he would not even have worried about the continuum limit and phase transitions.

Fra
As I understand Sabine's reasoning - based on her blog - the logic rests on her view that tuning a theory in its parameter space is not a physical process; thus "fine-tuning" is not a physics problem.

On the surface that seems hard to argue with. But after deeper reflection I find it is an oversimplification, though explaining why is not trivial.

But as Smolin hinted, what we aim at here is not just a description at any "cost" in terms of encoding capacity. What we seek is more explanatory power, given the constraints of the observing system.

This all relates directly to associating evolving law with the physical process that does correspond to tuning theories. So Sabine's view that theory tuning has no physical correspondence is, imo, likely wrong, and misses the KEY component of a measurement theory that does not reduce observers to non-physical gauges. This is hard to grasp, as it raises questions about the nature and objectivity of physical laws. Smolin dedicated books to the topic and still failed to convince most. This is not merely a technical issue; it is hard to digest conceptually.

I think I see Sabine's logic, but I do not share her premises and her take on physical law and theory. Sure, there is a difference between our models of reality and reality itself, but it's not that simple.

/Fredrik

I cannot understand why Wilson is blamed, er, sorry, acknowledged for the invention of the effective field theory paradigm. It does not follow from the rest of the eulogy. Had he been serious about effective theories, he would not even have worried about the continuum limit and phase transitions.

The Wilsonian effective field theory viewpoint is consistent with his interest in the continuum limit and phase transitions - they form a coherent whole.

Ten years ago, we wouldn't have thought someone mainstream or popular in the physics community would argue along Hossenfelder's lines. So it's not a stretch to imagine that ten years from now a popular physicist will argue that both naturalness and Hossenfelder's view are wrong: that all the Standard Model parameters were put in by hand, and that it's the end of science because we can no longer bridge the gap or understand it. I'd like to know whether such beliefs are already emerging among physicists, and whether, if this were proposed in the future, it would be a valid position. The future Hossenfelder counterpart might argue that we had to face the truth of what was going on, and that no naturalness argument can work in any theory beyond the Standard Model, because that is simply how things are.

Look, I'm not saying I agree with it. I just want to know whether a physicist proclaiming this in the future would be making a valid argument, or whether such thinking is outright disallowed in physics.

The main thing is that the standard model is widely believed not to be the final theory. Maybe by amazing luck it is. If you look at the example in post #4 by @suprised, you will see the analogy of a forest in which all the trees are the same height. That would not be weird if trees were ultimate fundamental particles, but we know they are not. And even given our knowledge that trees are not ultimate fundamental particles, we don't know of any law preventing a forest in which all trees are the same height, so one could criticize trying to find an explanation for it.

seazal
The main thing is that the standard model is widely believed not to be the final theory. Maybe by amazing luck it is. If you look at the example in post #4 by @suprised, you will see the analogy of a forest in which all the trees are the same height. That would not be weird if trees were ultimate fundamental particles, but we know they are not. And even given our knowledge that trees are not ultimate fundamental particles, we don't know of any law preventing a forest in which all trees are the same height, so one could criticize trying to find an explanation for it.

If all Standard Model parameters were put in by hand, or caused by dynamics so complex that humans will never completely understand them, would you still refer to this as "naturalness" in such an unreachable beyond-the-Standard-Model scenario? If yes, then it is not compatible with Hossenfelder's theme, which we could only reserve for a purely random process in which there is nothing beyond the Standard Model. This subtle distinction is very important for generations to come, so please clarify this vital issue.

Gold Member
The end of science in the sense of giving up on trying to find an explanation. This is about the scientific method. Essentially she claims that an observed fine-tuning does not need to have an explanation, that any observed numerical quantity is as good as any other, so stop here, cancel experiments, it is all meaningless.
Although her assertion that fine-tuning need not have an explanation may be correct, doing science means providing a plausible theory which explains why that should be so. It is not sufficient simply to assert it and move on. Furthermore, it's very easy to sit back and criticize the lack of progress in high energy physics; it's much harder to provide an alternative that addresses the outstanding questions in a convincing way. On the other hand, the way science has been done over the last 500 years or so has led to an incredible advancement in our understanding of nature, and the even more incredible progress of the last hundred years has left behind only very difficult questions. There is no principle requiring new discoveries to continue at the pace that led up to the standard model.

seazal
Although her assertion that fine-tuning need not have an explanation may be correct, doing science means providing a plausible theory which explains why that should be so. It is not sufficient simply to assert it and move on. Furthermore, it's very easy to sit back and criticize the lack of progress in high energy physics; it's much harder to provide an alternative that addresses the outstanding questions in a convincing way. On the other hand, the way science has been done over the last 500 years or so has led to an incredible advancement in our understanding of nature, and the even more incredible progress of the last hundred years has left behind only very difficult questions. There is no principle requiring new discoveries to continue at the pace that led up to the standard model.

A hundred years from now, when we are all gone, is there a possibility we may return to an age of superstition and faith, the kind that science has been trying to extinguish for centuries? By then, future LHCs would still have detected nothing, and we still won't have a theory supporting naturalness. And people won't just accept Hossenfelder's explanation that fine-tuning need not have an explanation because there is no statistical precedent. In fact, her proposal may be the dawn of a return to an age of superstition and faith. This is possible, is it not?

Fra
I see no reason to make this mystical in any way. I think the rationale against fine-tuning is the one mentioned in post 3: stability. It has absolutely nothing to do with the reals in the neighbourhood of 1 being a priori more probable than those near 10^24 per se. There IS, however, a logic to the fact that LARGE measures (requiring many bits to encode) consume more computational and memory resources, and thus have an evolutionary disadvantage. Simple models, which can be phrased in terms of small numbers, are thus more competitive. This is not mystical at all; it makes perfect sense to me.
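The encoding-cost point above can be made concrete with a small sketch (my own illustration, not from the thread): the number of bits needed to write down a positive integer of magnitude N grows like log2(N), so a dimensionless parameter of order 10^24 is far more expensive to encode, at fixed relative precision, than one of order unity.

```python
import math

def bits_to_encode(n: int) -> int:
    """Bits in the binary representation of a positive integer n."""
    return max(1, math.ceil(math.log2(n + 1)))

# A parameter of order unity is cheap to write down; one of order 10^24 is not.
for magnitude in (1, 10**2, 10**10, 10**24):
    print(magnitude, bits_to_encode(magnitude))  # 1, 7, 34 and 80 bits respectively
```

On this view the preference for order-unity ratios is a resource argument, not a claim about probability measures on the reals.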

A theory, seen as an explanatory model, that by its mathematics requires extreme fine-tuning in order to comply with observation is, given limited resolution and chaos, simply not stable, and thus not very viable - and thus not natural, as we know that evolved systems in nature, no matter how incredibly complex, are robust! Now if you take this into the world of thinking about evolution of law, manifested as tendencies for certain actions implemented in the physical structure of material systems, then total system stability suggests that we should expect nature to correspond, or be isomorphic, to a system of effective theories, where "effective" relates to the host observer in which they are encoded. This is just like needing to "scale down" or "dumb down" complex algorithms in order to be real-time efficient on smaller computers, at the cost of, say, lower confidence levels.

Sabine seems to think that this is not physics though, or that these things are not isomorphic to physical processes. Here there is a disagreement depending on how we understand or envision how nature actually "implements" and maintains things that "obey" apparent laws.

/Fredrik

seazal
I see no reason to make this mystical in any way. I think the rationale against fine-tuning is the one mentioned in post 3: stability. It has absolutely nothing to do with the reals in the neighbourhood of 1 being a priori more probable than those near 10^24 per se. There IS, however, a logic to the fact that LARGE measures (requiring many bits to encode) consume more computational and memory resources, and thus have an evolutionary disadvantage. Simple models, which can be phrased in terms of small numbers, are thus more competitive. This is not mystical at all; it makes perfect sense to me.

A theory, seen as an explanatory model, that by its mathematics requires extreme fine-tuning in order to comply with observation is, given limited resolution and chaos, simply not stable, and thus not very viable - and thus not natural, as we know that evolved systems in nature, no matter how incredibly complex, are robust! Now if you take this into the world of thinking about evolution of law, manifested as tendencies for certain actions implemented in the physical structure of material systems, then total system stability suggests that we should expect nature to correspond, or be isomorphic, to a system of effective theories, where "effective" relates to the host observer in which they are encoded. This is just like needing to "scale down" or "dumb down" complex algorithms in order to be real-time efficient on smaller computers, at the cost of, say, lower confidence levels.

Sabine seems to think that this is not physics though, or that these things are not isomorphic to physical processes. Here there is a disagreement depending on how we understand or envision how nature actually "implements" and maintains things that "obey" apparent laws.

/Fredrik

We are now in the safest period in history, where we either have Hossenfelder and the like to convince us to accept it just like that, without seeking further explanations, or others who are on a wild goose chase with the wrong theory.

If things remain that way in 50 years' time, then perhaps it should stay like that for centuries to come, for the safety of the public.

https://en.wikipedia.org/wiki/Naturalness_(physics)
"The concern is that it is not yet clear whether these seemingly exact values we currently recognize, have arisen by chance (based upon the anthropic principle or similar) or whether they arise from a more advanced theory not yet developed, in which these turn out to be expected and well-explained, because of other factors not yet part of particle physics models.".

These other factors could be what gave rise to the Big Bang, the Universe, and life as we know it. We share with it the supernal life, hence the supernal powers. On Earth, primitive beings couldn't even be entrusted with nuclear knowledge, much less knowledge related to the power of the universe (it must hence be entrusted only to a few who can take a vow of secrecy). Therefore, if these other factors could let humans create fearsome weapons, then I agree it must be suppressed. Humanity is now in the safest period in history because it is very easy to suppress it, by simply letting them take the present course.

Let humans prove that they can be entrusted with more knowledge without bringing fire and destruction to every path and land (or planet) they colonize or conquer; then all will be revealed.

Meantime, let's enjoy more debate between the Hossenfelders and the naturalists explaining null result after null result. Do you believe Hossenfelder's claim that "it may very well be that the LHC will remain the largest particle collider in human history" (her quote)?

http://backreaction.blogspot.com/2018/12/how-lhc-may-spell-end-of-particle.html

"How the LHC may spell the end of particle physics"

Auto-Didact
Do you believe Hossenfelder's claim that "it may very well be that the LHC will remain the largest particle collider in human history" (her quote)?
End of Western particle accelerators, perhaps. There simply are more important matters to the public at the moment than particle physics, e.g. gravitational wave astrophysics, network science and nuclear fusion, just to name a few; each of these is far more exciting and promising than a new multi-billion-dollar Null Results Collider.

In any case, seeing how things are currently developing, China is poised to take over the mantle of exploration from the Western world, perhaps even by storm within the coming decade: a lunar base, new colliders and a new generation of immense space-based detectors and stations. If they have the money to burn, let them invest; if you're a particle physicist, find another job or just move there.

seazal
End of Western particle accelerators, perhaps. There simply are more important matters to the public at the moment than particle physics, e.g. gravitational wave astrophysics, network science and nuclear fusion, just to name a few; each of these is far more exciting and promising than a new multi-billion-dollar Null Results Collider.

In any case, seeing how things are currently developing, China is poised to take over the mantle of exploration from the Western world, perhaps even by storm within the coming decade: a lunar base, new colliders and a new generation of immense space-based detectors and stations. If they have the money to burn, let them invest; if you're a particle physicist, find another job or just move there.

You are right that China is planning to develop the Circular Electron Positron Collider, a 100 TeV collider.

http://english.cas.cn/newsroom/news/201810/t20181023_199981.shtml

This is in spite of C. N. Yang's warning against it in 2016:

https://www.sciencemag.org/news/2016/09/debate-signals-cloudy-outlook-chinese-supercollider

China's motive for building it may not just be to study the Higgs in more detail; it may be that they have seen extra data not known to the rest of the world.

After China builds it, would they accept non-Chinese physicists? Would the machine use only the Chinese language in its computers and labels? How could non-Chinese-speaking foreign physicists work with them?

If they totally control it, and they discover something fantastic with global implications that could shift the balance of power, they could suppress it and use it to conquer the world (this is not a far stretch of the imagination; remember that Japan and Germany planned to conquer the world in WWII).

So we need to let Western and European physicists see the data in the not-so-far future, so they have the will to create equally powerful machines and take leadership in the free world.

The most tragic thing that could happen on Earth is a select elite group taking control of the knowledge in order to take control of the rest of the world.

So, more than ever, we need to work even harder and resist Hossenfelder by bringing signs of naturalness back into physics. The next 30 years will be critical. It will also be the time when the next generations of physicists are schooled, and there may be far fewer of them if Western/European physics continues at the current pace, with students taking banking or computer courses instead of physics, where there is much less return or promise. Even now, are not many theoretical physicists transferring to banking or other more promising industries?

So yes, the next 30 years will be critical. They will spell the difference between humanity perishing in the relatively near future (500 years from now, from wars, environmental collapse, an elite taking control of the masses, etc.) and becoming part of an intergalactic society (when we learn not to leave a trail of destruction wherever we go).

I'm Chinese, in my 40s. If I return to school and study theoretical physics, could they employ a 60-year-old physicist at the Circular Electron Positron Collider 20 years from now, in 2040?

Auto-Didact
You are right that China is planning to develop the Circular Electron Positron Collider, a 100 TeV collider.

http://english.cas.cn/newsroom/news/201810/t20181023_199981.shtml

This is in spite of C. N. Yang's warning against it in 2016:

https://www.sciencemag.org/news/2016/09/debate-signals-cloudy-outlook-chinese-supercollider
Yang is right in my opinion; there currently really is no strong scientific case for building it, i.e. no glaring prediction to falsify.

The immediate focus in physics should for the time being be on investing in new theoretical programmes and simultaneously exploring more promising experimental routes outside of collider physics. When the West is in a better financial position again, perhaps then building a new collider will become a good idea again.
China's motive for building it may not just be to study the Higgs in more detail; it may be that they have seen extra data not known to the rest of the world.

...
Like what? The worries in the rest of the post seem like pure speculation to me. Do you live in China?
I'm Chinese, in my 40s. If I return to school and study theoretical physics, could they employ a 60-year-old physicist at the Circular Electron Positron Collider 20 years from now, in 2040?
Perhaps, if you are able to make a career by then which is sufficient to meet their standards and/or you are willing to work for free/cheap.

seazal
Yang is right in my opinion; there currently really is no strong scientific case for building it, i.e. no glaring prediction to falsify.

The immediate focus in physics should for the time being be on investing in new theoretical programmes and simultaneously exploring more promising experimental routes outside of collider physics. When the West is in a better financial position again, perhaps then building a new collider will become a good idea again.

Like what? The worries in the rest of the post seem like pure speculation to me. Do you live in China?
Perhaps, if you are able to make a career by then which is sufficient to meet their standards and/or you are willing to work for free/cheap.

My father lived in China and I go back and forth. There are now 1.4 billion people in China and only about 300 million in the US. I'm very familiar with Chinese culture, and I know that the Chinese Academy of Sciences is in possession of rare, gifted individuals who show them there is more to physics than meets the eye. And they want full understanding for national security purposes. This is why they will want to build the Circular Electron Positron Collider. Of course, no one here would believe me, and I hate speculation. Just treat this as a hypothesis for when, in the future, you are scratching your heads over why China would want to spend so much to build it. The motivation is not just to study the Higgs in more detail but something far more important. I won't talk about this again, as I don't want to hijack this thread and I know this forum is not for speculation. But just remember or consider what I said, based on my being Chinese and hence privy to Chinese culture and knowledge, as well as having access to the scientific community and its physicists.

Auto-Didact
The motivation is not just to study the Higgs in more detail but something far more important.
Something of this nature, but then perhaps laced with anti-matter?

seazal
Something of this nature, but then perhaps laced with anti-matter?

There is a high probability that naturalness still works. Is it not the case that string theory is really naturalness all the way?

But any idea how string theory can proceed without any supersymmetry (null results at all levels)? Are you a string theorist, an LQGist, or a Hossenfelderist?

We may be dealing with a very, very complex problem. I wonder if the Chinese are clever enough to figure it out, or whether it will be out of reach for a few thousand years.

If they detect only the Higgs and nothing more at 100 TeV, would they ever learn the secret of dark matter or the cosmological constant by mastering knowledge of all the properties of the Higgs? Maybe they really want to know whether the vacuum is fine-tuned to metastability or is unstable. If they succeed in building the 100 TeV collider, 99.5% of the physicists working there, including Dr. Yang, would still be influenced by Western thoughts and concepts, would they not?

Last edited:
Mentor
Moving the thread to General Discussion, as it has become speculative. This means the source requirement is not as stringent.

Auto-Didact
To get back on topic: I agree with Hossenfelder that an unnatural theory need not be a problem and that we shouldn't spend too much time focussing on it; there is a term for this: numerology. However, I believe she makes an error when she then says that having naturalness as a criterion is unscientific and therefore wrong.

I will explain why: basically, what Hossenfelder has done is conduct an observational study among physicists using qualitative research methodology - a way of doing science I'm not sure she is that well acquainted with - and then offered her conclusion; given her lack of experience in doing this kind of research, it is still somewhat impressive.

First, she identified a problem: the stagnation of success in the practice of theoretical physics, i.e. no new successful theories since the SM. Over a period of many years, she identified a certain recurring behaviour among physicists ("looking for explanations for numerical parameters for questionable reasons"); observationally, she identified that this behaviour was often accompanied by the term "naturalness".

She then empirically, through questioning, interviews, discussions and literature review, identified what the term means in this context and identified this as a particular view, an aesthetic even ("the view that theories should have no large numerical discrepancies between parameters") occurring within the physics community.

All of the above is nice scientific work, probably worthy of a graduate level social science thesis. Incidentally, this is also the point where most social scientists - being of course empirically minded, not theoretically minded - would stop and just couch their tentative conclusions in a very traditional non-provocative discussion; Sabine however is not your typical social scientist but in fact a theoretical physicist, so naturally, she takes things much further.

She goes on to operationalize the aesthetic that she identified as a term: "naturalness". Based on this operationalization, she then extrapolates that most or all contemporary uses of the term in the context of physics refer to the same aesthetic; this is where things get murky, because this premise is questionable. Moreover, it even seems to be an error; however, the error is anything but obvious - it certainly wouldn't be to a scientist without any serious experience in doing qualitative research, i.e. probably >90% of physicists.

To me however - a specialist in social science as well, by necessity - it seems obvious that most uses of the term 'natural(ness)' do not necessarily refer to her specific operationalization of the identified aesthetic occurring among physicists; the phraseology of something 'being natural' is a widely used, vague common-language phrase, whose meaning is nevertheless often either clear from context or clarifiable by further questioning.

Mathematicians as well as practitioners of mathematics, across all different historical and professional subdivisions, love to use the phrase: 'being natural' to them usually means something quite different from Sabine's very specific operationalization; I can only summarize (or roughly operationalize) the typical meaning as the following: "any procedure or action or property of some thing which can clearly or intuitively be recognized by seasoned experts to belong to some particular mathematical (sub)domain".

Put more bluntly, the term 'natural' being a generally used term literally has multiple meanings which cannot all be assumed to always refer to the aesthetic that Sabine identified without further inspection; even as a technical definition it is not unique because practitioners of mathematics at almost all levels also tend to use the term to refer to blatantly different things without any reservation; it goes without saying that this includes many physicists as well.

One would have to clarify in each case what is meant by the term 'natural' when it is used, before blindly labelling the user as being driven by the identified aesthetic. If it turns out that the typical usage of the term among physicists significantly disagrees with the aesthetic operationalization, then the rest of her argument i.e. her conclusion that adherence to naturalness is a problem for science somewhat falls apart, because then interpreting the term natural to refer to adherence to the aesthetic would simply be an unfounded extrapolation from a subsample of physicists to physicists in general.

Last edited:
Auto-Didact
Peter Woit has a new blogpost where he evaluates the situation of a new collider over at CERN. He thinks the situation is bleak, stating among other things:
Peter Woit said:
Faced with a difficult choice like this, there’s a temptation to want to avoid it, to believe that surely new technology will provide some more attractive alternative. In this case though, one is running up against basic physical limits. For circular electron-positron machines, synchrotron radiation losses go as the fourth power of the energy, whereas for linear machines one has to put a lot of power in since one is accelerating then dumping the beam, not storing it. For proton-proton machines, CM energy is limited by the strength of the dipole magnets one can build at a reasonable cost and operate reliably in a challenging environment. Sure, someday we may have appropriate cheap 60T magnets and a 100 TeV pp collider could be built at reasonable cost in the LHC tunnel. We might also have plasma wakefield technology that could accelerate beams of electrons and positrons to multi-TeV energies over a reasonable distance, with a reasonable luminosity. At this point though, I’m willing to bet that in both cases we’re talking about 22nd century technology unlikely to happen to fall into the 21st century. Similar comments apply to prospects for a muon collider.

...

Where I think Hossenfelder is right is that too many particle physicists of all kinds went along with the hype campaign for bad theory in order to get people excited about the LHC. Going on about extra dimensions and black holes at the LHC was damaging to the understanding of what this science is really about, and completely unnecessary since there was plenty of real science to generate excitement. The discussion of post-LHC experimental projects should avoid the temptation to enter again into hype-driven nonsense. On the other hand, the discussion of what to defund because of the LHC results should stick to defunding bad theory, not the experiments that refute it.
The discussion in the comments is quite lively, with points from both sides, and well worth reading through.
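Woit's point about physical limits is quantitative: for circular electron-positron machines, the synchrotron energy radiated per turn scales as the fourth power of the beam energy (and inversely with the bending radius). A toy sketch of what that scaling implies (my own illustration, not from the post; the E^4/ρ scaling law for relativistic electron rings is standard):

```python
def loss_per_turn_ratio(energy_ratio: float, radius_ratio: float = 1.0) -> float:
    """Relative synchrotron energy loss per turn, using U ∝ E^4 / ρ
    (valid for relativistic electron storage rings)."""
    return energy_ratio**4 / radius_ratio

# Doubling the beam energy in the same tunnel costs 16x the loss per turn;
# recovering the original per-turn loss would need a ring 16x larger.
print(loss_per_turn_ratio(2.0))        # 16.0
print(loss_per_turn_ratio(2.0, 16.0))  # 1.0
```

This is why the post treats higher-energy circular e+e- machines as running into basic physical limits rather than merely engineering ones.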