Naturalness: dimensionless ratios

  • #1

Main Question or Discussion Point

The concept of naturalness, i.e. that dimensionless ratios of a theory's parameters should be of order unity, has recently come under criticism, most visibly because Sabine Hossenfelder wrote a book (Lost in Math) criticizing it.

Very recently, however, Peter Shor and Lee Smolin had a discussion about this over at Peter Woit's blog, in which Smolin explained why 'order unity' matters for the practice of physics, going back to Fermi and Feynman:
Peter Shor said:
@Lee:

When you say that “any pure dimensionless constants in the parameters of a physical theory that are not order unity require explanation,” you are implicitly putting a probability distribution on the positive reals which is sharply peaked at unity.

Doesn’t this assumption also require explanation? Why should the range of numbers between 1 and 2 be any more probable than the range between 10^10 and 10^20? Aren’t there just as many numbers in the range between 10^10 and 10^20 as there are between 1 and 2? (Uncountably many in each.)
Lee Smolin said:
Dear Peter Shor,

Yes, exactly, and let me explain where that expectation for dimensionless ratios to be order unity comes from.

Part of the craft of a physicist is that a good test of whether you understand a physical phenomenon, say a scattering experiment, is whether you can devise a rough model that, with a combination of dimensional analysis and order of magnitude reasoning, gets you an estimate to within a few orders of magnitude of the measured experimental value. People like Fermi and Feynman were masters at this, a skill that was widely praised and admired.

The presumption (rewarded in many, many cases) was that the difference between such rough estimates and the exact values (which were by definition dimensionless ratios) were expressed as integrals over angles and solid angles, coming from the geometry of the experiment, and these always gave you factors like 1/2pi or 4pi^2, which were order unity.

Conversely, if your best rough estimate does not get you within a few orders of magnitude of the measured value, then you don’t understand something basic about your experiment.

Seen from the viewpoint of this craft, if your best estimate for a quantity like the energy density of the vacuum is 120 orders of magnitude larger than the measured value, the lesson is that we don’t understand something very basic about physics.

Thanks,

Lee
Given Smolin's historical explanation, i.e. that naturalness is essentially a heuristic tool based on dimensional analysis, which makes it a good strategy for quickly solving Fermi problems, I would say that adherence to naturalness is a pretty strong criterion for 'doing good physics'.
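Shor's measure-theoretic point can be made concrete with a toy calculation (my own sketch, not from the blog exchange): whether 'order unity' is privileged depends entirely on which measure one puts on the positive reals. A scale-invariant (log-uniform) measure, often taken as the natural choice for dimensionless ratios, weighs an interval by how many decades it spans rather than by its raw length:

```python
import math

# Lebesgue (uniform) measure of an interval [a, b] is b - a;
# a scale-invariant (log-uniform) measure assigns log(b) - log(a),
# i.e. it weighs an interval by how many e-folds it spans.
def uniform_measure(a, b):
    return b - a

def log_uniform_measure(a, b):
    return math.log(b) - math.log(a)

near_unity = (1.0, 2.0)
huge = (1e10, 1e20)

# Under the uniform measure the huge interval dwarfs [1, 2]...
print(uniform_measure(*huge) / uniform_measure(*near_unity))   # ~1e20

# ...but under the log-uniform measure it is larger only by a
# modest factor: 10 decades versus log10(2) of a decade.
print(log_uniform_measure(*huge) / log_uniform_measure(*near_unity))
# 10 * ln(10) / ln(2) ≈ 33.2
```

Neither measure peaks at unity, so this only sharpens Shor's question of where the "order unity" expectation comes from; Smolin's answer above is that it is empirical craft knowledge, not an a priori probability assignment.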
 
  • Like
Likes Andrea Panza, Charles Link, arivero and 2 others

Answers and Replies

  • #2
king vitamin
Science Advisor
Gold Member
Some science writers, like Sabine Hossenfelder, seem to argue that naturalness isn't an argument at all and that one shouldn't worry about its absence. Most confusingly for me, even John Baez had a comment on one of these blogs about how naturalness does not make predictions. This seems totally counter to the modern Wilsonian effective field theory approach to particle physics. If one assumes some cutoff on the order of [itex]10^{15}[/itex] GeV and considers the leading marginal/irrelevant operators, one finds an explanation of the global symmetries present in the Standard Model, and one can even predict neutrino masses in the eV range (as Weinberg did via precisely this line of thought in 1979). Indeed, all of the old pre-Wilsonian arguments for leaving out nonrenormalizable interactions are now instead arguments about naturalness, and we see that it has an incredibly impressive predictive power.
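For concreteness, here is a back-of-the-envelope version of that neutrino-mass estimate (my own sketch of the standard dimensional-analysis argument; the cutoff value is the assumption stated above): the leading dimension-5 operator gives [itex]m_\nu \sim v^2/\Lambda[/itex], with [itex]v \approx 246[/itex] GeV the electroweak scale.

```python
# Dimensional-analysis estimate from the dimension-5 (Weinberg) operator:
# the leading irrelevant operator left after integrating out physics above
# the cutoff Lambda gives Majorana neutrino masses m_nu ~ v^2 / Lambda.
v = 246.0        # GeV, electroweak (Higgs) vacuum expectation value
Lambda = 1e15    # GeV, assumed cutoff near the GUT scale, as in the post

m_nu_GeV = v**2 / Lambda
m_nu_eV = m_nu_GeV * 1e9   # 1 GeV = 10^9 eV

print(f"m_nu ~ {m_nu_eV:.2f} eV")   # ~0.06 eV, i.e. the sub-eV range
```

The point of the exercise is that the smallness of neutrino masses comes out automatically once the operator's dimension and the cutoff are fixed; no parameter needs to be tuned far from order unity.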

But it does seem like naturalness breaks down for a number of observables, and this requires a satisfactory explanation (and I do not find Hossenfelder's dismissal of naturalness satisfactory). Here's a recent discussion about this crisis which I did enjoy: https://arxiv.org/pdf/1710.07663.pdf
 
  • Like
Likes atyy and Auto-Didact
  • #4
Hossenfelder's crusade against particle physics is getting more ridiculous every day. In essence she promotes the end of science. It is as if a remote forest were discovered where all the trees have the same height to within one nanometer. If people say "this is unnatural, there must be some explanation for it", her answer would be: I can't define probability here, so any answer must be meaningless; people who try to find an explanation are misguided, and I demand they stop working on this asap.

Fortunately no serious scientist pays attention to this, no matter how shrill her voice becomes.
 
  • Like
Likes atyy and weirdoguy
  • #5
arivero
Gold Member
Funny example; surprising, as such a coincidence should be called "natural" in our context. On the other hand, we always have the example of the excessively natural coupling of the top quark, which almost nobody worries about.
 
  • #6
In essence she promotes the end of science.
The end of particle physics is hardly the end of science.
It is as if a remote forest were discovered where all the trees have the same height to within one nanometer. If people say "this is unnatural, there must be some explanation for it", her answer would be: I can't define probability here, so any answer must be meaningless; people who try to find an explanation are misguided, and I demand they stop working on this asap.
I can see what you're trying to say, but this seems like a gratuitous strawman to me. Naturalness can be a good tool, but need not be; as Hossenfelder points out there certainly seems to be a problem with making some arguments from naturalness in particle physics.

As Smolin makes clear, this means that something very basic about physics is not understood, i.e. the mathematics underlying particle physics is only valid to a certain extent, and its extrapolated accuracy becomes very questionable beyond that domain.
 
  • #7
The end of particle physics is hardly the end of science.
The end of science in the sense of trying to find an explanation. This is about the scientific method. Essentially she claims that an observed fine-tuning does not need to have an explanation, that any observed numerical quantity is as good as any other one, so stop here, cancel experiments, it is all meaningless.
Following this expert advice leads to nowhere.

Another example: why are the pion masses so much smaller than the natural scale of QCD? Shouldn't there be an explanation? Hossenfelder's advice would be: there is nothing wrong, so why waste time on an ill-posed problem? End of science.

But of course there is an explanation of this. Namely, there is a broken symmetry behind it, and the pions are (pseudo-)Goldstone bosons. That is the science that she, and anyone who would listen to her, would miss.
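For readers who want numbers, the Goldstone explanation can be checked with the leading-order Gell-Mann-Oakes-Renner relation (my own rough sketch; the input values are approximate textbook numbers, not from the post): [itex]m_\pi^2 f_\pi^2 \approx (m_u + m_d)\,|\langle\bar{q}q\rangle|[/itex], so the pion mass is controlled by the small quark masses and vanishes in the chiral limit.

```python
# Leading-order GMOR relation: m_pi^2 * f_pi^2 ~ (m_u + m_d) * |<qbar q>|.
# The pion mass goes to zero with the quark masses (chiral limit),
# which is why it sits far below the ~1 GeV QCD scale.
m_u = 2.2        # MeV, up-quark mass (approximate, MS-bar at 2 GeV)
m_d = 4.7        # MeV, down-quark mass (approximate)
f_pi = 92.0      # MeV, pion decay constant (approximate)
condensate = 250.0 ** 3   # MeV^3, |<qbar q>| ~ (250 MeV)^3 (approximate)

m_pi = ((m_u + m_d) * condensate / f_pi ** 2) ** 0.5
print(f"m_pi ~ {m_pi:.0f} MeV")   # roughly 110-115 MeV vs measured ~140 MeV
```

Getting within ~20% of the measured value from a one-line estimate is exactly the kind of "craft" success Smolin describes in the opening post.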

Further example: the almost vanishing cosmological constant. Apart from all the usual arguments, here is a simplified version: how come the dynamics of the big bang was so foresighted that (leaving all other phenomena aside), after the QCD phase transition, which would dump vacuum energy of order (1 GeV)^4, the outcome is an incredibly small number? Shouldn't there be an explanation of this?
Or where precisely is the fault in asking this question in the first place?
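The size of the mismatch in this simplified version is easy to quantify (my own rough arithmetic; the ~1 GeV QCD scale is the figure used above, and the dark-energy scale of roughly [itex]2\times 10^{-3}[/itex] eV is the standard observed value):

```python
import math

# Scales entered in eV; raising to the fourth power gives an
# energy density in eV^4, so the mismatch is (scale ratio)^4.
qcd_scale = 1e9              # ~1 GeV, QCD vacuum-energy scale (assumed)
dark_energy_scale = 2.3e-3   # ~2e-3 eV, observed dark-energy scale

ratio = (qcd_scale / dark_energy_scale) ** 4
print(f"mismatch: about 10^{math.log10(ratio):.0f}")
# roughly 46-47 orders of magnitude from this one contribution alone
```

So even this single, well-understood contribution overshoots the observed value by dozens of orders of magnitude, which is a milder version of the famous 120-orders estimate mentioned in the opening post.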

Hossenfelder would say: I tune the boundary conditions as I like, so there is no problem explaining this or any other number. Again, the end of science!

Naturalness is not a tool but a heuristic principle that generally works very well in all sorts of contexts. It is a well-proven guideline, and therefore it was a good idea to see how far we get by applying the most naive recipes of particle theory to the problem of the Higgs mass etc. These ideas were too naive with hindsight, and didn't work, so this indeed created a sense of crisis in the community. But the right attitude is to work harder, be more clever, and take more precise measurements, in order to eventually find an explanation. Not to declare that there is no problem with hierarchies and fine-tuning, and stop doing science.

Even worse is to propagate this attitude and make a hullabaloo all over the internet and media, from a filter bubble that has little connection to real science. This actually points to another problem, but one that has nothing to do with physics: it is the phenomenon of "failed physicist sees physics fail", which is all too common in the blogosphere.
 
  • Like
Likes atyy, king vitamin and akvadrako
  • #8
king vitamin
Science Advisor
Gold Member
Even worse is to propagate this attitude and make a hullabaloo all over the internet and media, from a filter bubble that has little connection to real science. This actually points to another problem, but one that has nothing to do with physics: it is the phenomenon of "failed physicist sees physics fail", which is all too common in the blogosphere.
This is my real problem with Hossenfelder. Take, for example, her recent response to Jamie Farnes' paper: http://backreaction.blogspot.com/2018/12/no-negative-masses-have-not.html. Farnes did not write a press release on his paper as soon as he posted it to arXiv; rather, Farnes waited until it had gone through peer review and was published before giving a writeup of his work for the general public.

Hossenfelder then writes a post on her blog, which has a large layman audience, tearing it apart. Is this what she considers the correct avenue for scientific discourse? Farnes did the correct thing and waited for the work to pass peer review; at the very least Hossenfelder should have written up a note to arXiv (like Brian Skinner did in response to the recent claim of high-Tc superconductivity) before going public with her issues with the paper. Instead she sows doubt among the general public without engaging in the scientific process. It's very dangerous, because we do not need popular opinion - which is more easily swayed by charisma and persuasion rather than technical arguments - to be what determines our research directions and grant funding.
 
  • Like
Likes romsofia, Klystron, Vanadium 50 and 1 other person
  • #9
It's very dangerous, because we do not need popular opinion - which is more easily swayed by charisma and persuasion rather than technical arguments - to be what determines our research directions and grant funding.
I agree that it is very dangerous, but I do not necessarily agree that it isn't needed. As Hossenfelder, and legions of other scientists (not merely physicists), have pointed out, there is today, to put it mildly, a problem with the self-corrective mechanism of science. Other areas of professional human endeavor (such as law, medicine, government and business) have faced similar problems.

Scientists are no exception: scientists display group behavior with all its ugly consequences. The problem with physicists (especially theoreticians), in contrast to practitioners in all other professional endeavors, is that the actual validity of their work usually has no direct consequences for others or themselves. This can cause a physicist, especially once he starts to realize he can get away with it, to adopt a somewhat carefree, cavalier attitude w.r.t. his own work and even the work of his 'friends'; this leads to a great many physicists who know how to game the academic system.

From those other fields, there is a surefire way to combat this kind of behavior: timed or random inspections (think quality assurance, internal affairs, etc.) actively challenging workers by publicly reviewing their work and having them openly defend it: if they cannot justify their work adequately, they get penalized accordingly. Usually this is done by independent groups of experts in the same field in order to make sure that their judgement is valid.

If every practitioner (or just the large majority) comes to believe that in their day-to-day work they are constantly at risk of having this happen to their work (a Panopticon), the idea is that practitioners will begin to self-regulate their behavior by acting more carefully in their work and more responsibly, notifying colleagues whose work might be at risk and who therefore put the entire department at risk.

In essence this is just an expansion of the peer-review process, albeit one far less opaque and far less susceptible to corruption; it goes without saying this expanded process isn't perfect. In any case it is an empirically proven method for implementing behavioral self-correction in groups of people. In clinical medicine for example, this is how quality assurance of care and professional expertise is maintained among physicians.
 
  • #10
Vanadium 50
Staff Emeritus
Science Advisor
Education Advisor
2019 Award
This is my real problem with Hossenfelder.
Mine is that she describes people she has scientific differences with as "liars". She has descended into Lubos-land. I simply don't take her criticism seriously any more.
 
  • Like
Likes atyy and king vitamin
  • #11
king vitamin
Science Advisor
Gold Member
The problem with physicists (especially theoreticians) in contrast to practitioners in all other professional endeavors, is that the actual validity of their work usually has no direct consequences for others or themselves.
What criterion for "validity" are you using for this? And what consequences do you propose for theoreticians breaching your rules of validity?

I have seen many theorists - some with Nobel prizes - discredited within academia for their contributions being low-caliber. As far as I can tell, the old guard is not sacrosanct.

This can cause a physicist, especially once he starts to realize he can get away with it, to adopt a somewhat carefree, cavalier attitude w.r.t. his own work and even the work of his 'friends'; this leads to a great many physicists who know how to game the academic system.
This is totally unfamiliar to my experience as a research physicist; in fact I can think of counterexamples. Can you please give an explicit example before making such a damning accusation?

From those other fields, there is a surefire way to combat this kind of behavior: timed or random inspections (think quality assurance, internal affairs, etc.) actively challenging workers by publicly reviewing their work and having them openly defend it: if they cannot justify their work adequately, they get penalized accordingly. Usually this is done by independent groups of experts in the same field in order to make sure that their judgement is valid.
I'm just confused by this entire paragraph. All work in science is peer reviewed, period. What are you talking about??? What "groups of experts in the same field" do you propose who aren't already doing all the peer reviewing??? It's the same people!!

If every practitioner (or just the large majority) comes to believe that in their day-to-day work they are constantly at risk of having this happen to their work (a Panopticon), the idea is that practitioners will begin to self-regulate their behavior by acting more carefully in their work and more responsibly, notifying colleagues whose work might be at risk and who therefore put the entire department at risk.

In essence this is just an expansion of the peer-review process, albeit one far less opaque and far less susceptible to corruption; it goes without saying this expanded process isn't perfect. In any case it is an empirically proven method for implementing behavioral self-correction in groups of people. In clinical medicine for example, this is how quality assurance of care and professional expertise is maintained among physicians.
I'm not going to lie: the more your line of thought goes on, the more it resembles the Cultural Revolution rather than an actual scientist interested in the truth. (And I feel the need to say that I am a leftist who is not using the Cultural Revolution as a red scare tactic: I really do feel like this is an anti-intellectual attack.)

More directly: who is it that you think should judge what constitutes a good cosmology paper, if not the peer reviewers of the relevant journals? Who are these "independent groups of experts in the same field" (as though they are not refereeing journals already)?

If you could not tell from my previous paragraphs (and the context of our conversation), I'm worried that you want these fields to be judged by those who are not experts.
 
  • Like
Likes romsofia
  • #12
atyy
Science Advisor
This is my real problem with Hossenfelder. Take, for example, her recent response to Jamie Farnes' paper: http://backreaction.blogspot.com/2018/12/no-negative-masses-have-not.html. Farnes did not write a press release on his paper as soon as he posted it to arXiv; rather, Farnes waited until it had gone through peer review and was published before giving a writeup of his work for the general public.

Hossenfelder then writes a post on her blog, which has a large layman audience, tearing it apart. Is this what she considers the correct avenue for scientific discourse? Farnes did the correct thing and waited for the work to pass peer review; at the very least Hossenfelder should have written up a note to arXiv (like Brian Skinner did in response to the recent claim of high-Tc superconductivity) before going public with her issues with the paper. Instead she sows doubt among the general public without engaging in the scientific process. It's very dangerous, because we do not need popular opinion - which is more easily swayed by charisma and persuasion rather than technical arguments - to be what determines our research directions and grant funding.
I'm not at all a fan of Hossenfelder's recent criticisms of physics, but I think it's perfectly fine for her to express her views on her blog and book. I myself became interested in string theory mainly after the Smolin and Woit books.

BTW, I found your comment on naturalness and the Wilson viewpoint interesting. Preskill, in his eulogy, does wonder whether Wilson could have steered us wrong in this case. https://quantumfrontiers.com/2013/06/18/we-are-all-wilsonians-now/

I should of course point out that if Wilson was wrong here, that does not mean Hossenfelder is right (ie. I largely agree with you that what she writes is hostile to good science).
 
  • Like
Likes Auto-Didact
  • #13
Before I continue, I need to make a short digression to make crystal clear that my suggestions in principle have nothing whatsoever to do with nonsensical SJW measures, equal social diversity rates or unproven/counterproductive anti-sexual harassment proposals. Carrying on.
What criterion for "validity" are you using for this? And what consequences do you propose for theoreticians breaching your rules of validity?
My rough operationalization of validity is w.r.t. the conclusions of the researcher, based on expert appraisal of the quality and originality of the researcher's chosen methodology. The experts should consist of a 'jury' of, let us say, about 5 to 8 independent active practitioners in that same subfield with at least 15 years of experience, picked at random from the pool of all experts in that subfield.

I am definitely not the one to decide what validity is; this panel of experts in literally every (sub)field of physics would need to decide, somewhat aristocratically, every few years what constitutes quality in new research, and periodically publish these recommendations as guidelines for the subfield. I would merely suggest that their independent judgements be ordinally ranked, e.g. as 'high quality', 'average quality', 'low quality', and that it be explicitly explained how this reflects the expert opinion within the subfield and why.

The actual establishment of what constitutes quality and originality is far more difficult than it seems, since quality can vary over time both within and between subfields, and originality is even more slippery; there are already highly advanced methodologies and measures invented to quantify exactly such things, e.g. topological data analysis and dynamic network analysis come directly to mind. Moreover, if some research is of a highly interdisciplinary nature, it might need to be judged by experts in each field separately as well as by a combined panel from both fields.

To make this more concrete I will give an example: what constituted high quality research methodology in optomechanics in 2010 doesn't necessarily constitute high quality research methodology in the same subfield in 2018, nor does it necessarily constitute high quality research methodology in another subfield of physics, e.g. in high temperature superconductivity. There are even curious differences, e.g. a novel mathematically advanced methodology invented in high energy physics may literally exist under another name in other older fields such as fluid dynamics and even be regarded as pedestrian within that subfield, since their own methodologies have strongly evolved since.
I have seen many theorists - some with Nobel prizes - discredited within academia for their contributions being low-caliber. As far as I can tell, the old guard is not sacrosanct.
I agree that this is a problem, which is exactly why I believe when judging validity of conclusions one needs to take into account both quality and originality. Moreover, it might be that anyone who has a Nobel Prize would need to get judged in an altogether different manner than non-Nobel laureates.
This it totally unfamiliar to my experience as a research physicist - in fact I can thing of counterexamples. Can you please give explicit example before such a damning accusation?
I have done research in multiple fields of science (physics, neuroscience, economics, data science, medicine, psychology). In all of them I have invariably seen many researchers and practitioners, both consciously and unconsciously, take shortcuts and cut corners at times, for a variety of reasons: lack of time, frustration with co-workers, not receiving payment for a particular aspect of work, hyping work purely to get funds, working towards performance indices instead of actually trying to perform high-quality work, choosing safe low-risk strategies of known low utility even when more promising but riskier strategies are available (with a smaller chance of publishing in a high-impact journal), thereby choosing their career over bettering science, not publishing negative findings, not speaking up against results for fear of risking their position/career, etc.

My intention is not to judge any researchers, but instead to make them aware that, despite any intentions, they are human and that they are therefore susceptible to the same biases and behavioral traits as other humans. This also means that the directly perceived consequences and appreciation of their work by not only other researchers but also in general has an effect on how they do their work.
I'm just confused by this entire paragraph. All work in science is peer reviewed, period. What are you talking about??? What "groups of experts in the same field" do you propose who aren't already doing all the peer reviewing??? It's the same people!!
There is an argument to be made that the peer-review system as it stands is a bit too opaque. This invariably leads to clique formation, i.e. counterproductive group behavior making the experts more homogeneous in thinking and behavior than is warranted or reflective of actual practice. The usual performance indices, such as citation indices, cannot control for this either, since they are far too simplified, focusing mostly on the productivity of individual researchers in terms of papers and the short-term progress of a subfield rather than on subfields and the longer-term picture; moreover, the advanced measures required cannot be properly chosen or wielded by administrators or regulators if they do not possess the necessary mathematical background for understanding these tools.

There is a large amount of empirical research demonstrating this, and what exactly the negative consequences are for a field if such things are left unchecked. The consequences of making such strategic mistakes in theoretical physics don't seem as dire as in, e.g., engineering; such differences in the appreciation of consequences among practitioners literally lead to the very idea that there may be a good case that cutting certain corners can be justified for whatever reason, e.g. heuristic or aesthetic reasons, as Hossenfelder points out. Every junior researcher invariably mimics both the good and the bad of coworkers and more experienced researchers when learning how to do research and how to survive as a researcher in practice; before long they may have developed a strategy that they need to keep to in order to survive or even thrive in practice, independent of increasing the quality of their papers.

In the practice of theoretical physics, this would largely translate to a lack of innovation in theory production and an excess of mimicking behavior, larger than should be expected based on the makeup of both the population of research programmes and researchers. In the ideal case, higher quality would lead not merely to more funds, but to a higher wage as well; I believe the relative lowness of physics wages actually causes much of the 'cutting corners'/lack of innovation problems in physics at the microeconomic and psychological level for individuals.
I'm not going to lie: the more your line of thought goes on, the more it resembles the Cultural Revolution rather than an actual scientist interested in the truth. (And I feel the need to say that I am a leftist who is not using the Cultural Revolution as a red scare tactic: I really do feel like this is an anti-intellectual attack.)
I fully understand your trepidation. I am not saying that this needs to be done per se; I am saying that there is a good case to be made that physicists should themselves start wanting to do this for all the benefits, and I think researchers like Hossenfelder see this as well. The fact is that peer-review systems in all professional endeavors have evolved with time, in order to adapt to the environmental changes in their field (number of practitioners, amount of funds, number of research programmes, breakthroughs, etc.).

The review system in the practice of physics, apart from the arXiv, on the other hand seems to have remained stagnant, even directly challenging innovation, despite extreme changes in the landscape of actual practice: the number of novel mathematical, statistical and computational tools and techniques available has alone already exploded to such a degree that it is somewhat of a mystery that physicists, given the choice, tend to stick to relatively outdated or less potent known techniques.

It is somewhat puzzling to me that (theoretical) physics seems to be the only STEM discipline that does this to such a large extent; it is almost as if the familiar techniques already oversaturated physicists' wants and needs 50 years ago, and the utility of most new techniques cannot even be judged because the backlog of available techniques has become so large.

Another thing is that most physicists, myself included, will eventually start complaining that they just want to get to the physics instead of worrying about such matters; the problem is that no one else is properly equipped to do this job except physicists. For contrast, in clinical medicine exactly the same thing occurred: the physicians eventually realized this for themselves, and over a period of 50 years incorporated the updated peer-review process into the actual practice of doing clinical medicine. Their review system is continuously monitored, and guidelines are updated periodically by large councils consisting of practicing and retired expert physicians.
More directly: who is it that you think should judge what constitutes a good cosmology paper, if not the peer reviewers of the relevant journals? Who are these "independent groups of experts in the same field" (as though they are not refereeing journals already)?

If you could not tell from my previous paragraphs (and the context of our conversation), I'm worried that you want these fields to be judged by those who are not experts.
As I said before: a small group of independent practicing experts in the same subfield with about 15 years or more of practical experience; best of all would be if each expert worked in a competing research programme. To make it even more clear: every single practicing physicist would eventually reach expert status and would therefore need to be able to do this on the fly.

It should be clear that what I am describing here does not yet exist in practice for physics. I think the best institutions to set up such a programme are large institutions such as the APS and the biggest research journals. One thing is sure: in order to carry out such research quality-assurance management, far more physicists would need to be trained than are being trained today; I see this as a very good thing, not least because it creates a new class of jobs which cannot disappear.

A potential strong benefit is that physicists who become experts at doing this could be hired by institutions of other professions to review their respective research methodologies w.r.t. physics, especially in government and healthcare; the irony is that this actually happened more during the 20th century, but then stopped for a large variety of reasons. In any case, I can tell you right now that most practitioners in most other fields know practically no physics and have never had any significant professional interaction with a physicist or mathematician; the negative consequences of this should be starkly obvious to both parties.
 
  • #14
I'm not at all a fan of Hossenfelder's recent criticisms of physics, but I think it's perfectly fine for her to express her views on her blog and book. I myself became interested in string theory mainly after the Smolin and Woit books.

BTW, I found your comment on naturalness and the Wilson viewpoint interesting. Preskill, in his eulogy, does wonder whether Wilson could have steered us wrong in this case. https://quantumfrontiers.com/2013/06/18/we-are-all-wilsonians-now/

I should of course point out that if Wilson was wrong here, that does not mean Hossenfelder is right (ie. I largely agree with you that what she writes is hostile to good science).
Ten years ago we wouldn't have thought that someone mainstream or popular in the physics community would argue along Hossenfelder's lines. So it's not a stretch to imagine that, 10 years from now, a popular physicist would argue that both naturalness and Hossenfelder's view are wrong, that all the Standard Model parameters were put in by hand, and that it's the end of science because we couldn't bridge the gap or understand it anymore. I'd like to know whether such beliefs are already emerging in any physicist now, or, if this were proposed in the future, whether it would be a valid position. The future Hossenfelder counterpart might argue that we have to face the truth that this is simply what is going on, and that no naturalness argument can work in any theory beyond the Standard Model, because that is just how things are.

Look, I'm not saying I agree with it. I just want to know whether any physicist proclaiming this in the future would have a valid argument, or whether it is outright disallowed in physics to think such a thing.
 
  • Like
Likes Auto-Didact
  • #15
I'd like to know whether such beliefs are already emerging in any physicist now, or, if this were proposed in the future, whether it would be a valid position.
Look, I'm not saying I agree with it. I just want to know whether any physicist proclaiming this in the future would have a valid argument, or whether it is outright disallowed in physics to think such a thing.
I completely agree with this. It is far better to know what physicists actually, honestly think and feel about the academic problems they face in the practice of physics, instead of sticking your head in the sand and pretending that there aren't any real problems. Ideally, it is exactly physicists who should be the ones most worried about such matters and most dedicated to finding solutions, long before it reaches the point that the public chooses to get involved.
 
  • #16
717
495
BTW, I found your comment on naturalness and the Wilson viewpoint interesting. Preskill, in his eulogy, does wonder whether Wilson could have steered us wrong in this case. https://quantumfrontiers.com/2013/06/18/we-are-all-wilsonians-now/
Very interesting, especially the displayed attitude of almost disgust toward unnatural theories.

This seems to me clearly reminiscent of the strong psychological reactions, i.e. the theoretical biases, that pre-20th century mathematicians held against concepts such as non-continuity, non-smoothness and other peculiar functions labeled as 'pathological' (i.e. sick or diseased), all of which are today universally accepted and respected by mathematicians.
 
  • #17
arivero
Gold Member
3,304
59
I cannot understand why Wilson is blamed, er, sorry, acknowledged, for the invention of the effective field theory paradigm. It does not follow from the rest of the eulogy. Had he been serious about effective theories, he would not even have worried about the continuum limit and phase transitions.
 
  • #18
Fra
3,097
144
As I understand Sabine's reasoning, based on her blog, the logic rests on her view that tuning the theory in its parameter space is not a physical process, and thus "fine-tuning" is not a physics problem.

On the surface that seems hard to argue with. But on deeper reflection I find that it is an oversimplification, though explaining why is not trivial.

But as Smolin hinted, what we aim at here is not just a description at any "cost" in terms of encoding capacity. What we seek is more explanatory power, given the constraints of the observing system.

This all relates directly to associating evolving law with the physical process that does correspond to tuning theories. So Sabine's view that theory tuning has no physical correspondence is, in my opinion, likely wrong, and misses the KEY component of a measurement theory that does not reduce observers to non-physical gauges. This is hard to grasp, as it raises questions about the nature and objectivity of physical laws. Smolin dedicated books to the topic and still failed to convince most. This is not merely a technical issue; it is hard to digest conceptually.

I think I see Sabine's logic, but I do not share her premises and her take on physical law and theory. Sure, there is a difference between our models of reality and reality itself, but it's not that simple.

/Fredrik
 
  • #19
atyy
Science Advisor
13,810
2,078
I cannot understand why Wilson is blamed, er, sorry, acknowledged, for the invention of the effective field theory paradigm. It does not follow from the rest of the eulogy. Had he been serious about effective theories, he would not even have worried about the continuum limit and phase transitions.
The Wilsonian effective field theory viewpoint is consistent with his interest in the continuum limit and phase transitions; they form a coherent whole.
 
  • #20
atyy
Science Advisor
13,810
2,078
Ten years ago, we wouldn't have thought that someone mainstream or popular in the physics community would argue along Hossenfelder's lines. So it's not a stretch to imagine that ten years from now, a popular physicist would argue that both the naturalness view and Hossenfelder's view are wrong: that all the standard model parameters were simply put in by hand, and that it's the end of science because we can no longer bridge the gap or understand it. I'd like to know whether any physicists already hold such beliefs, and whether such a proposal would be a valid position if it were made in the future. A future counterpart of Hossenfelder might argue that we have to face the truth that this is simply what is going on, and that no naturalness argument can work in any theory beyond the standard model.

Look, I'm not saying I agree with it. I just want to know whether a physicist proclaiming this in the future would be making a valid argument, or whether such thinking is simply disallowed in physics.
The main thing is that the standard model is widely believed not to be the final theory. Maybe, by amazing luck, it is. If you look at the example in post #4 by @suprised, you will see the analogy of a forest in which all the trees are the same height. That would not be weird if trees were ultimate fundamental particles, but we know they are not. And even given our knowledge that trees are not ultimate fundamental particles, we don't know of any law preventing a forest in which all trees are the same height, so one could criticize trying to find an explanation for it.
 
  • #21
119
3
The main thing is that the standard model is widely believed not to be the final theory. Maybe, by amazing luck, it is. If you look at the example in post #4 by @suprised, you will see the analogy of a forest in which all the trees are the same height. That would not be weird if trees were ultimate fundamental particles, but we know they are not. And even given our knowledge that trees are not ultimate fundamental particles, we don't know of any law preventing a forest in which all trees are the same height, so one could criticize trying to find an explanation for it.
If all standard model parameters were put in by hand, or were caused by dynamics so complex that humans will never completely understand them, would you still refer to this as "naturalness" in such an unreachable version of beyond-the-standard-model physics? If yes, then it is not compatible with Hossenfelder's theme, which we could only reserve for a purely random process where there is nothing beyond the standard model. This subtle distinction is very important for generations to come, so please clarify this vital issue.
 
  • #22
121
64
The end of science in the sense of trying to find an explanation. This is about the scientific method. Essentially she claims that an observed fine tuning does not need to have an explanation, any observed numerical quantity is as good as any other one, so stop here, cancel experiments, it is all meaningless.
Although her assertion that fine tuning need not have an explanation may be correct, doing science means providing a plausible theory which explains why that should be so. It is not sufficient to just assert that and move on. Furthermore, it's very easy to simply sit back and criticize the lack of progress in high energy physics. It's much harder to provide an alternative that addresses the outstanding questions in a convincing way. On the other hand, the way science has been done over the last 500 years or so has led to an incredible advancement in our understanding of nature, and the even more incredible progress made over the last hundred years has left what remains as very difficult questions. There is no principle that requires new discoveries to continue at the pace leading up to the standard model.
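To put rough numbers on the fine tuning being debated here, a back-of-envelope sketch of the hierarchy problem (my own illustration, not from any post above; the 125 GeV Higgs mass and a Planck-scale cutoff are assumed inputs):

```python
import math

# The Higgs mass-squared receives quantum corrections of order the
# cutoff squared. If the cutoff is taken at the Planck scale, the bare
# parameter must cancel the correction to ~34 decimal places to leave
# the observed value, which is why the ratio below is called unnatural.
m_higgs = 125.0   # GeV, observed Higgs mass
cutoff = 1.2e19   # GeV, Planck scale used as an illustrative cutoff

observed = m_higgs**2    # observed mass-squared, GeV^2
correction = cutoff**2   # size of a generic quadratic correction, GeV^2

# The dimensionless ratio that naturalness says "should" be order unity:
tuning = observed / correction
print(f"m_H^2 / Lambda^2 ~ {tuning:.1e}")  # → m_H^2 / Lambda^2 ~ 1.1e-34
```

Whether a number like 10^-34 demands an explanation is exactly the point of disagreement in this thread.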
 
  • #23
119
3
Although her assertion that fine tuning need not have an explanation may be correct, doing science means providing a plausible theory which explains why that should be so. It is not sufficient to just assert that and move on. Furthermore, it's very easy to simply sit back and criticize the lack of progress in high energy physics. It's much harder to provide an alternative that addresses the outstanding questions in a convincing way. On the other hand, the way science has been done over the last 500 years or so has led to an incredible advancement in our understanding of nature, and the even more incredible progress made over the last hundred years has left what remains as very difficult questions. There is no principle that requires new discoveries to continue at the pace leading up to the standard model.
A hundred years from now, when we are all gone, is there a possibility we may return to the age of superstition and faith that science has been trying to extinguish for many centuries? By then, the future LHCs may still have detected nothing, and we still won't have a theory supporting naturalness. And people won't just accept Hossenfelder's explanation that fine tuning need not have an explanation because there is no statistical precedent. In fact, her proposal may be the dawn of an era that takes us back to the age of superstition and faith. This is possible, is it not?
 
  • #24
Fra
3,097
144
I see no reason to make this mystical in any way. I think the rationale against fine-tuning is what is mentioned in post 3: stability. It has absolutely nothing to do with the reals in the neighbourhood of 1 being a priori more probable than 10^24 per se. There IS, however, a logic to the fact that LARGE measures (requiring many bits to encode) consume more computational and memory resources, and thus have an evolutionary disadvantage. Simple models, which can be phrased in terms of small numbers, are thus more competitive. This is not mystical at all; it makes perfect sense to me.
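The "many bits to encode" point can be made concrete with a small sketch (my own illustration, assuming we encode a dimensionless ratio down to a fixed absolute precision, which is what a fine-tuned cancellation demands):

```python
import math

def bits_to_encode(ratio, absolute_precision=1.0):
    """Bits needed to specify a dimensionless ratio to a fixed
    absolute precision: the larger the ratio, the more bits."""
    return math.ceil(math.log2(ratio / absolute_precision))

# An order-unity ratio is cheap to encode; a hierarchy-sized one is not.
print(bits_to_encode(4))     # → 2
print(bits_to_encode(1e24))  # → 80
```

Under this (admittedly simplified) accounting, a parameter of order 10^24 costs roughly 80 bits where an order-unity one costs a couple, which is one way to cash out the claimed evolutionary disadvantage of large measures.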

A theory, seen as an explanatory model, whose mathematics requires extreme fine-tuning in order to comply with observation suggests that, given limited resolution and chaos, it simply is not stable, and thus not very viable, and thus not natural, since we know that evolved systems in nature, no matter how incredibly complex, are robust! Now, if you take this into the world of thinking about the evolution of law, manifested as tendencies for certain actions implemented in the physical structure of material systems, then total system stability suggests that we should expect nature to correspond, or be isomorphic, to a system of effective theories, where "effective" relates to the host observer where they are encoded. This is just like needing to "scale down" or "dumb down" complex algorithms in order to make them real-time efficient on smaller computers, but at the cost of, say, lower confidence levels.

Sabine seems to think that this is not physics, though, or that these things are not isomorphic to physical processes. Here there is a disagreement depending on how we understand or envision how nature actually "implements" and maintains things that "obey" apparent laws.

/Fredrik
 
  • #25
119
3
I see no reason to make this mystical in any way. I think the rationale against fine-tuning is what is mentioned in post 3: stability. It has absolutely nothing to do with the reals in the neighbourhood of 1 being a priori more probable than 10^24 per se. There IS, however, a logic to the fact that LARGE measures (requiring many bits to encode) consume more computational and memory resources, and thus have an evolutionary disadvantage. Simple models, which can be phrased in terms of small numbers, are thus more competitive. This is not mystical at all; it makes perfect sense to me.

A theory, seen as an explanatory model, whose mathematics requires extreme fine-tuning in order to comply with observation suggests that, given limited resolution and chaos, it simply is not stable, and thus not very viable, and thus not natural, since we know that evolved systems in nature, no matter how incredibly complex, are robust! Now, if you take this into the world of thinking about the evolution of law, manifested as tendencies for certain actions implemented in the physical structure of material systems, then total system stability suggests that we should expect nature to correspond, or be isomorphic, to a system of effective theories, where "effective" relates to the host observer where they are encoded. This is just like needing to "scale down" or "dumb down" complex algorithms in order to make them real-time efficient on smaller computers, but at the cost of, say, lower confidence levels.

Sabine seems to think that this is not physics, though, or that these things are not isomorphic to physical processes. Here there is a disagreement depending on how we understand or envision how nature actually "implements" and maintains things that "obey" apparent laws.

/Fredrik
We are now at the safest period in history, where we have either Hossenfelder and the like convincing us to accept things as they are without seeking further explanations, or others on a wild goose chase with the wrong theory.

If things remain that way in 50 years' time, then it should stay like that for centuries to come, for the safety of the public.

https://en.wikipedia.org/wiki/Naturalness_(physics)
"The concern is that it is not yet clear whether these seemingly exact values we currently recognize, have arisen by chance (based upon the anthropic principle or similar) or whether they arise from a more advanced theory not yet developed, in which these turn out to be expected and well-explained, because of other factors not yet part of particle physics models."

These other factors could be what gave rise to the Big Bang, the Universe, and life as we know it. We share with it the supernal life, hence the supernal powers. On earth, primitive beings could not even be entrusted with nuclear knowledge, let alone knowledge related to the power of the universe (it must hence be entrusted only to a few who can take a vow of secrecy). Therefore, if these other factors could let humans create some fearsome weapons, then I agree it must be suppressed. Humanity is now at the safest period in history because it is very easy to suppress it: simply let them take the present course.

Let humans prove that they can be entrusted with more knowledge without bringing fire and destruction to every path and land (or planet) they colonize or conquer, and then all will be revealed.

Meanwhile, let's enjoy more debate between the Hossenfelders and the naturalists explaining null result after null result. Do you believe Hossenfelder's claim that "it may very well be that the LHC will remain the largest particle collider in human history" (her quote)?

http://backreaction.blogspot.com/2018/12/how-lhc-may-spell-end-of-particle.html

"How the LHC may spell the end of particle physics"
 
