A Hubble tension -- any resolution?

Thread starter: Mordred (Science Advisor)
Tags: Hubble Tension

TL;DR Summary: Hubble tension -- any resolution?
A few years back there was an issue called the Hubble tension, where observations were not matching up to predictions when comparing early- and late-time data.

I had heard that one possible explanation is that we are in an underdense region, but I have also read a counter-paper to this.
Has there been any resolution, or is the issue still being resolved?
 
Well, I guess that depends on what you mean by "a few".

Maybe 30 years ago, the Hubble tension was whether it was around 50 km/s/Mpc or 100. Today the tension is whether it is around 68 or 74.
 
Mordred said:
Has there been any resolution or is the issue still being resolved ?
Still unresolved, it seems. The latest voice on this was from the DESI collaboration:
[attached figure]

Fig. 9 from data release VI ('Cosmological constraints...').
 
Vanadium 50 said:
Maybe 30 years ago, the Hubble tension was whether it was around 50 km/s/Mpc or 100. Today the tension is whether it is around 68 or 74.
But the source of the "tension" now is not quite the same. Maybe 30 years ago, the "tension" was just due to relatively large error bars on the various measurements that go into estimating the Hubble constant, so the range of possible estimated values was pretty wide. Now the "tension" is that there are two different calculations based on two different sets of measurements and they give different answers, and at least according to some, the error bars around each calculation are narrow enough that the difference between the two answers is significant.
 
Bandersnatch said:
Still unresolved, it seems. The latest voice on this was from the DESI collaboration:
Fig. 9 from data release VI ('Cosmological constraints...').
Thanks, Bandersnatch, for that reference. I guess it's a problem that's going to take some time to pin down. Not that I expected an immediate resolution, given the nature of the problem and the difficulties that go with precise measurements at different cosmological scales.

Just to avoid confusion this is the tension I'm referring to

Tensions between the Early and the Late Universe

https://arxiv.org/abs/1907.10625
 
Mordred said:
this is the tension I'm referring to
This is the tension I described in the latter part of post #4.
 
PeterDonis said:
Maybe 30 years ago, the "tension" was just due to relatively large error bars on the various measurements that go into estimating the Hubble constant,
Just the opposite. The error bars were too small.

I'm told that it is totally impossible for that to be the case today. Of course, I was also told the exact same thing back then.
 
Vanadium 50 said:
The error bars were too small.
I thought the error bars back then were large enough that the range of estimates was 50 to 100.
 
Yes, I caught that reference in post #4. It got me thinking that I should include a reference for any other readers who may not be familiar with the tension.

I often keep wondering if the tension may have something to do with how the densities of matter and radiation evolve under expansion, via the following relation as a function of redshift:

$$ H(z)=H_0\sqrt{\Omega_m(1+z)^3+\Omega_{rad}(1+z)^4+\Omega_{\Lambda}} $$

Another conjecture I thought of trying is to look into the Saha equations for any phase transitions that may apply, but those are just conjectures on my part. (I would be surprised if these haven't been looked into where applicable.) Though I am currently researching electroweak and nucleosynthesis processes, I doubt it will affect what I'm working on. So it's more for curiosity's sake, regarding any new findings on the tension.
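For readers who want to play with the relation above, here is a minimal Python sketch of it. The density parameter values are illustrative round numbers (roughly Planck-like); they are assumptions for this sketch, not values quoted in the thread.

```python
import math

def hubble_rate(z, H0=67.4, omega_m=0.315, omega_rad=9.2e-5, omega_lambda=0.685):
    """H(z) in km/s/Mpc from the flat-LCDM Friedmann relation above.

    Default parameter values are illustrative (roughly Planck 2018),
    assumed here for demonstration only.
    """
    return H0 * math.sqrt(
        omega_m * (1 + z) ** 3
        + omega_rad * (1 + z) ** 4
        + omega_lambda
    )

# The matter term dominates at recombination (z ~ 1100); Lambda dominates today.
print(hubble_rate(0))      # ≈ 67.4 km/s/Mpc today
print(hubble_rate(1100))   # expansion rate around recombination, over 1e6 km/s/Mpc
```

Varying `omega_m` versus `omega_lambda` shows how sensitive the early-time expansion rate is to the matter term, which is part of why the CMB-based inference is so model dependent.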
 
  • #10
Vanadium 50 said:
Just the opposite. The error bars were too small.
Can you provide a reference? What different measurements were in tension back then?
 
  • #11
I'll look things up when I get a moment (which will not be today and probably not tomorrow). Looking up decades-old papers is not as easy as looking up years-old papers.
 
  • #12
I do recall some contention over the first Planck dataset, with its low value (if I recall correctly, about 67) when previous datasets were roughly 73.

It seems to me Planck always had a lower value than other datasets such as from COBE and WMAP as two examples. Thinking about this last night made me realize there is one question I don't know how to answer.

What happens to expansion rates as a result of recombination, when free protons/neutrons and electrons combine to form atoms?

Yes, they would all have the same equation of state (matter, P = 0). However, a density change should affect expansion rates. I don't recall ever reading any literature describing this effect on expansion rates.
 
  • #13
Mordred said:
However, a density change should affect expansion rates.
Why would matter density change on recombination?
 
  • #14
PeterDonis said:
But the source of the "tension" now is not quite the same. Maybe 30 years ago, the "tension" was just due to relatively large error bars on the various measurements that go into estimating the Hubble constant, so the range of possible estimated values was pretty wide. Now the "tension" is that there are two different calculations based on two different sets of measurements and they give different answers, and at least according to some, the error bars around each calculation are narrow enough that the difference between the two answers is significant.

Yes, it is called "tension" when the error bars narrow to the point where they no longer overlap. What was happening 24 years ago was not a "tension" because the error bars were too wide: from 64 to 80 km/s/Mpc.

In fact, the Hubble tension began with the first Planck measurements in 2013, which gave H0 = 67.3 ± 1.2 km/s/Mpc, while cosmic distance ladder measurements gave 74.5 ± 3.0 km/s/Mpc, as can be seen in Figure 6 of https://arxiv.org/pdf/2308.02474
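As a quick sanity check on how such a "tension" is quantified, here is a small sketch computing the separation of two measurements in combined standard deviations, assuming independent Gaussian errors (a common simplification, not a claim about how any particular collaboration does its statistics):

```python
import math

def tension_sigma(m1, e1, m2, e2):
    """Number of combined standard deviations separating two measurements,
    assuming independent Gaussian errors (added in quadrature)."""
    return abs(m1 - m2) / math.sqrt(e1 ** 2 + e2 ** 2)

# Values quoted above: Planck 2013 vs. the distance ladder.
print(round(tension_sigma(67.3, 1.2, 74.5, 3.0), 1))  # ≈ 2.2 sigma
```

As the error bars on each side shrink, the same central values drift to higher significance, which is exactly how the tension grew over the following decade.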

Regarding the OP's question, it seems that in a recent conference, Wendy Freedman showed signs of a possible solution, as described by Dr. Becky here.

Of course, as Dr. Becky rightly points out, we need to wait for the publication of the paper to draw conclusions.
 
  • #15
Mordred said:
I often keep wondering if the tension may have something to do with how the evolution of matter and radiation vary due to expansion via the following relations as a function of redshift

$$ H(z)=H_0\sqrt{\Omega_m(1+z)^3+\Omega_{rad}(1+z)^4+\Omega_{\Lambda}} $$
Well, that's what Adam Riess has been hinting at for at least 5 years: that there is something wrong with the Standard Model and therefore new physics is required to solve the Hubble tension.
 
  • #16
Bandersnatch said:
Why would matter density change on recombination?
That's the conjecture: I'm not positive whether recombination would or wouldn't alter the overall matter density. Matter doesn't have a momentum term to exert pressure (not that pressure is particularly applicable here). It may be that the reason I've never come across any literature on a change in matter density is that no change occurs due to recombination. However, I don't know that for sure. You could very well be correct that no change occurs, at least in terms of the critical-to-actual density relation via the critical density formula.
 
  • #17
Thanks for that link; I will look at it after work, Jaime.
 
  • #18
Mordred said:
no change in the density occurs due to recombination
No change in total stress-energy density occurs, but there is a change in the matter density, because the binding energy of recombination, which before was contained in the matter (nuclei and electrons), gets released as radiation. So some of the total stress-energy density gets changed from matter density to radiation density.

Bandersnatch said:
Why would matter density change on recombination?
See above.
 
  • #19
Thanks, PeterDonis, that makes sense. Lol, though now I want to look into the applicable calculations. Thankfully I already have the calculations for hydrogen, deuterium, and lithium via the Saha equations, or rather the applicable graphs.
 
  • #20
PeterDonis said:
because the binding energy of recombination
Yeah, I thought that might be the idea. But while technically true, that's gotta be negligible for any purposes concerning evolution of the universe. We're not even talking nuclear binding energies here.
Which, imo, would be the likely reason Mordred never saw it included anywhere.
 
  • #21
Mordred said:
What happens to expansion rates as a result of recombination when the mean average density of protons/neutrons and electrons combine to form atoms ?
Given that the CMB is the result of recombination, I don't think this can be related to the Hubble tension.
 
  • #22
Bandersnatch said:
while technically true, that's gotta be negligible for any purposes concerning evolution of the universe.
Yes, the binding energy as a fraction of the total is very small--about one hundred millionth if we assume hydrogen-1 (about 10 eV binding energy vs. about 1 GeV total energy per hydrogen atom).
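The estimate above can be checked in one line; the rest energy figure below is a round number for the proton plus electron, used purely to reproduce the order-of-magnitude argument in the post:

```python
# Rough fraction of a hydrogen atom's rest energy released at recombination.
binding_energy_ev = 13.6      # hydrogen ionization energy, the ~10 eV scale above
rest_energy_ev = 938.8e6      # proton + electron rest energy, the ~1 GeV scale above

fraction = binding_energy_ev / rest_energy_ev
print(f"{fraction:.1e}")  # ~1.4e-08, i.e. about one part in a hundred million
```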
 
  • #23
Thanks; then I don't need to consider it for my project regarding nucleosynthesis. Though that's only one portion of the project (the tail end, so to speak). Regardless, that is a bit off topic; as mentioned, it should have no significant effect on expansion rates. My thanks on that.
 
  • #24
Vanadium 50 said:
I'm told that it is totally impossible for that to be the case today. Of course, U was also told the exact same thing back then.
The statistical errors in the data, and the identified systematic measurement errors, are probably too small, even if slightly understated, to resolve the problem.

The late time Hubble constant estimates have moderately large errors amenable to slight adjustment, but the Planck CMB based calculation has such a large data set and involves such precise observations, that at least within the model within which the calculation is done, the errors are tiny. And, the tension is too big to resolve with errors in the late time Hubble constant measurements alone. It also makes sense to doubt that the late time measurements are really the problem, because, while they have larger error bars, these measurements are also robust, with many different methods of measuring the Hubble constant in the late time era largely confirming each other.

Many preprints by independent authors so far this year have all reached that conclusion. See, e.g., Kumar (2024), Gialamas (2024), Colgáin (2024), Roy (2024), Pogosian (2024), He (2024), Calderon (2024), Signorini (2024), and Akarsu (2024). Valentino and Blunt have even written a whole book on the Hubble tension, largely concurring with this conclusion.

Stacy McGaugh speculates that the problem may be "a systematic in the interpretation of the CMB data rather than in the local distance scale."

In other words, the tension might be a modeling error in the CMB based calculation of the Hubble constant, since that is a model dependent calculation and there are other cracks in the LambdaCDM model of cosmology that was used to make that calculation (such as the appearance of galaxies observed by the JWST sooner than the model). See also Liu (2024) making a similar analysis. (Note that in the sense being discussed, the argument being made is not that "modeling error" means that the LambdaCDM model was inaccurately translated into math and data analysis by Planck's scientists; instead, the concern is that the LambdaCDM model is not the correct cosmology model to use because it misstates the astrophysical reality.)

This makes sense, because inaccuracies in the model used to make the calculation won't show up in the Planck measurement error bars, and because this is pretty much a singular measurement approach in tension with the diverse measurement approaches used in the late time era. If a different model produced a higher value of the Hubble constant from the CMB, the tension would go away and it might fix other tensions in the cosmology fits too. And, this is far from the only tension that LambdaCDM has with observational evidence, so something about that model needs to be fixed in any case.

LambdaCDM may be a good first order approximation of our universe's cosmology with a small number of parameters. But as our data gets better on multiple fronts, it may not be good enough to fit all the data.

Gialamas (2024), on the other hand, argues basically that the late time measurement may be a local effect. This paper argues that the part of the universe where we are measuring the late time Hubble constant may just be randomly non-representative of the late time universe as a whole. In other words, everyone is accurately measuring what they see, but they are failing to take into account a basically random sampling error which causes our little corner of the universe to be weird. This random sampling error may be much bigger than one might naively expect, because these random variations are correlated with each other due to their common cosmological origins in the early universe.

Resolving the source of the still unresolved tension isn't easy, and there are a variety of proposals out there to gather new kinds of data to figure out why there is a tension.
 
  • #25
Thanks for the additional references; I'm currently studying each separately. At a quick glance, I noted that several of them suggest an evolving Lambda term, which has its own implications for gathering data to help identify the cause of the Lambda term. That in and of itself I consider useful, as the cosmological constant has been problematic in terms of the standard model.

Some of those links you provided may also be helpful in another line of research I've been keeping track of, concerning the Higgs field as the cosmological constant. One of the problems I noted and have never resolved is that the papers involved typically use the same equation-of-state terms for the scalar field as have been used for chaotic inflation via the inflaton. That alone I found questionable, to say the least, as it seems to involve too great an assumption. This is another line of active research I have been focused on for several years, so it will be useful to look at the comparisons. Particularly since one of the other issues I was struggling with in the Higgs case is how that can lead to the constant density evolution suggested by the w = -1 relation.

Obviously at this point I'm not forming any opinion on which possibility is best, but rather choose to study each possibility equally. Those references you provided I will find useful for various related reasons that, in some cases, go beyond just the Hubble tension itself. So once again, my thanks.
 
  • #27
Jaime Rudas said:
Slides from Freedman's presentation can be viewed here.
It should be noted that Freedman's results have not yet been published.

 
  • #28
Some interesting details in what's shown above; I look forward to the paper itself. There is a particular metallicity relation in the above that definitely piqued my interest. I will have to see what details I can pull on that.

I'm glad to see that Freedman suggested that no new physics is required due to JWST measurements.
Thanks for the additional detail.
 
  • #29
Jaime Rudas said:
Regarding the Hubble tension, at the Lemaître Conference 2024 taking place at the Vatican Observatory, contributions from Wendy Freedman and Adam Riess were presented today. Slides from Freedman's presentation can be viewed here.
The bottom line from Freedman's presentation is that:

[attached slide from Freedman's presentation]

And basically, the presentation convincingly shows that this late time measurement of the Hubble constant is solid and should have a small uncertainty.

The cosmic microwave background measurement mostly from Planck data, in contrast, according to this published paper from 2019 is:
Recent estimates of the Hubble constant from CMB anisotropies by Planck Collaboration, [are] (H0 = 66.93 ± 0.62 km s−1 Mpc−1, Planck Collaboration Int. XLVI 2016)[.]

This is a 1.4 sigma tension before considering Cepheid distances (which would be called "consistent with each other"), but could rise to as much as 3.4 sigma at the high end of the range after considering them, even though, even at the highest tension, it is only a 6% discrepancy in relative terms.
 
  • #30
Some further details on the Leavitt Law in regards to Cepheid metallicity, mentioned above in the Freedman article.
There's a considerable number of papers on calibration using the Milky Way and Local Group. Some examples below:

https://ui.adsabs.harvard.edu/abs/2023MNRAS.520.4154M/abstract

https://arxiv.org/abs/2205.06280

The last link above relates this to the Hubble constant, in essence forming the first rung of the cosmological distance ladder.

https://arxiv.org/abs/2103.10894

As this research is related, I thought I would add it to this thread.
 
  • #31
Mordred said:
I'm glad to see that Freedman suggested that no new physics is required due to JWST measurements.
Thanks for the additional detail.
Yes, she had already suggested this in the lecture she gave in April at the American Physical Society, as announced by Dr. Becky. The novelty here is that we now have the presentation slides.
 
  • #32
Well, I for one have always hated the "new physics required" trend you see in a large number of studies where some contention shows up. It seems to be a very common declaration, used particularly in (but not restricted to) pop media.
Typically, what I've seen in the last few decades is that where some contention shows up, the problem often gets resolved via some calibration fine-tuning or other related systematic error margin, etc.
Our models are extremely successful and robust, with huge supportive bodies of evidence that are deeply interconnected among numerous related physics theories. Knowing this, I typically approach these findings with the frame of mind that new physics isn't usually the answer.

However, that's just me.
 
  • #33
Turns out this article is more related than I realized, in regards to the Hubble constant and Cepheid metallicity.
https://arxiv.org/abs/2103.10894

One interesting detail is that it describes a contention over which Cepheid is brighter (metal-rich or metal-poor). I plan to follow up on this aspect, as it bears looking into. I thought others may find it interesting.
 
  • #34
PS: what a great set of slides from Wendy!

ohwilleke said:
inaccuracies in the model used to make the calculation won't show up in the Planck measurement error bars

ohwilleke said:
LambdaCDM may be a good first order approximation of our universe's cosmology with a small number of parameters. But as our data gets better on multiple fronts, it may not be good enough to fit all the data.

ohwilleke said:
In other words, everyone is accurately measuring what they see, but they are failing to take into account a basically random sampling error which causes our little corner of the universe to be weird.

As ohwilleke points out (just above), part of this tension may be related to the fact that calibration of distance ladders is done within the local supercluster (Laniakea). Any unaccounted-for bulk flows, non-homogeneities, or non-isotropies beyond this back yard would affect the validity of such a local yardstick.

Mordred said:
Some of those links you provided may also be helpful in another line of research I've been keeping track of in terms of The Higgs field as the cosmological constant. One of the problems I noted and have never resolved is that any papers involved typically use the same equations of state terms for the scalar field as has been used for chaotic inflation via the inflaton. That alone I found questionable to say the least as it seems to involve too great an assumption.
Mordred, regarding your Higgs-related research, would a non-constant distribution of that condensate of weak hypercharge be worth exploring? Global symmetry-breaking, leading to global homogeneity, while simplifying a standard model for particle physicists, might be another of those "too great assumptions"?
 
  • #35
nnunn said:
part of this tension may be related to the fact that calibration of distance ladders is done within the local supercluster (Laniakea). Any unaccounted-for bulk flows, non-homogeneities, or non-isotropies beyond this back yard would affect the validity of such a local yardstick.
Internally, galaxy clusters are neither homogeneous nor isotropic, so I don't understand how these inhomogeneities and anisotropies can affect the calibration of the distance ladder.
 
  • #36
Mordred said:
Well, I for one have always hated the "new physics required" trend you see in a large number of studies where some contention shows up. It seems to be a very common declaration, used particularly in (but not restricted to) pop media.
Mordred said:
Typically, what I've seen in the last few decades is that where some contention shows up, the problem often gets resolved via some calibration fine-tuning or other related systematic error margin, etc.
Mordred said:
Our models are extremely successful and robust, with huge supportive bodies of evidence that are deeply interconnected among numerous related physics theories. Knowing this, I typically approach these findings with the frame of mind that new physics isn't usually the answer.

However, that's just me.
I hear you, and agree with you strongly in the area of high energy physics and in describing highly relativistic systems like black holes and white dwarf-black hole binary systems.

In the case of cosmology, while there is something to be said for not leaping off to new physics without trying to give existing models a try, it is also true that the LambdaCDM model is not nearly as complete as the Standard Model of Particle Physics, for example.

The CDM part of the LambdaCDM model is basically a placeholder with a quite general description of dark matter's properties, but a lot of the specifics not worked out.

Also, while it would be one thing if the Hubble tension was the only issue with the LambdaCDM model, and indeed, it is still, barely, possible that the Hubble tension could be resolved with improved measurements, the LambdaCDM model has lots and lots and lots of independently measured tensions with astronomy observations, i.e. dozens of them (something that has been explored in some detail and length in other PF threads).

The unmodified LambdaCDM model remains the paradigm mostly because it is easy to work with and has few parameters (just six in the most basic version), and because no consensus has emerged around any one alternative to it.
 
  • #37
nnunn said:
Mordred, regarding your Higgs-related research, would a non-constant distribution of that condensate of weak hypercharge be worth exploring? Global symmetry-breaking, leading to global homogeneity, while simplifying a standard model for particle physicists, might be another of those "too great assumptions"?

The electroweak processes involving the Higgs would likely be washed out via inflationary processes. However, one may get signatures in the CMB. So the electroweak process itself involving the Higgs wouldn't be the cause of the Hubble contention.

However, and this is specifically if the cosmological constant itself involves the Higgs field as the cause of Lambda: as the universe expands, you should see a reduction in the energy density term for Lambda, as with every other SM field. So far, no evidence for a varying Lambda term has been conclusive.
Where this becomes important is the effective equation of state in the scalar field modelling.

https://en.m.wikipedia.org/wiki/Equation_of_state_(cosmology)

See the last equation there: if it gives a value other than w = -1, then the cosmological constant will vary. However, all research and measurements so far agree strongly with w = -1.
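The point about w = -1 can be illustrated with the standard scaling of energy density with the scale factor for a constant equation of state, rho ∝ a^(-3(1+w)); this is a generic textbook relation, sketched here for illustration:

```python
def density_scaling(a, w):
    """Relative energy density rho(a)/rho(a=1) for a component with
    constant equation-of-state parameter w, via rho ∝ a^(-3(1+w))."""
    return a ** (-3.0 * (1.0 + w))

# w = 0 (matter) dilutes as a^-3; w = 1/3 (radiation) as a^-4;
# w = -1 (cosmological constant) stays exactly constant under expansion.
for w in (0.0, 1.0 / 3.0, -1.0):
    print(w, density_scaling(2.0, w))
```

This is why a Higgs-sourced Lambda that diluted like other fields would show up as a varying dark energy density, whereas w = -1 pins the density constant.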
 
  • #38
ohwilleke said:
In the case of cosmology, while there is something to be said for not leaping off to new physics without trying to give existing models a try, it is also true that the LambdaCDM model is not nearly as complete as the Standard Model of Particle Physics, for example.

The CDM part of the LambdaCDM model is basically a placeholder with a quite general description of dark matter's properties, but a lot of the specifics not worked out.

Also, while it would be one thing if the Hubble tension was the only issue with the LambdaCDM model, and indeed, it is still, barely, possible that the Hubble tension could be resolved with improved measurements, the LambdaCDM model has lots and lots and lots of independently measured tensions with astronomy observations, i.e. dozens of them (something that has been explored in some detail and length in other PF threads).

The unmodified LambdaCDM model remains the paradigm mostly because it is easy to work with and has few parameters (just six in the most basic version), and because no consensus has emerged around any one alternative to it.

Agreed on everything you have above. We share the same way of thinking on this.
 
  • #39
Hi Jaime, I wrote:
nnunn said:
part of this tension may be related to the fact that calibration of distance ladders is done within the local supercluster (Laniakea). Any unaccounted-for bulk flows, non-homogeneities, or non-isotropies beyond this back yard would affect the validity of such a local yardstick.
You replied:
Jaime Rudas said:
Internally, galaxy clusters are neither homogeneous nor isotropic, so I don't understand how these inhomogeneities and anisotropies can affect the calibration of the distance ladder.

Indeed! But by "back yard" and "yardstick", I was thinking of Laniakea.

So the "bulk flows, non-homogeneities, or non-isotropies" I had in mind would be on a larger scale, that is to say, external to our local supercluster. Sorry for not being clear.
 
  • #40
ohwilleke said:
The unmodified LambdaCDM model remains the paradigm mostly because it is easy to work with and has few parameters (just six in the most basic version), and because no consensus has emerged around any one alternative to it.
In my opinion, the ΛCDM model remains the paradigm because it is, by far, the model that best fits the observations, and no alternative has emerged that even remotely comes close in this regard.
 
  • #41
nnunn said:
Indeed! But by "back yard" and "yardstick", I was thinking of Laniakea.

So the "bulk flows, non-homogeneities, or non-isotropies" I had in mind would be on a larger scale, that is to say, external to our local supercluster. Sorry for not being clear.
But the calibration of the distance ladder does not depend on the presence or absence of inhomogeneities or anisotropies at any scale.
 
  • #42
To add some clarity to Jaime Rudas' post: the papers I linked in post #30 describe calibration procedures for our Milky Way. Those calibration procedures set specific filters with regard to the Leavitt Law, in essence fine-tuning metallicity detection of different Cepheids in combination with parallax distance measures from the Gaia parallax database.

In cosmology, a commonly used tool is the luminosity-distance relation, so fine-tuning this relation helps calibrate further measurements where parallax becomes impractical.
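The step from a calibrated luminosity to a distance is the standard distance-modulus relation, m - M = 5 log10(d_pc) - 5. A minimal sketch, with hypothetical magnitude values chosen only for illustration (no extinction or metallicity corrections):

```python
import math

def distance_from_magnitudes(apparent_mag, absolute_mag):
    """Distance in Mpc from the distance-modulus relation
    m - M = 5*log10(d_pc) - 5. Ignores extinction; illustrative only."""
    d_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return d_pc / 1e6  # parsecs -> megaparsecs

# Hypothetical Cepheid: absolute magnitude M from the Leavitt
# (period-luminosity) law, apparent magnitude m from photometry.
print(distance_from_magnitudes(25.0, -5.0))  # 10.0 Mpc
```

Any systematic shift in the calibrated absolute magnitudes (e.g. from metallicity) propagates exponentially into the inferred distances, which is why the calibration work above matters so much for the Hubble constant.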
 
  • #43
Mordred said:
To add some clarity to Jaime Rudas' post: the papers I linked in post #30 describe calibration procedures for our Milky Way. Those calibration procedures set specific filters with regard to the Leavitt Law, in essence fine-tuning metallicity detection of different Cepheids in combination with parallax distance measures from the Gaia parallax database.

In cosmology, a commonly used tool is the luminosity-distance relation, so fine-tuning this relation helps calibrate further measurements where parallax becomes impractical.
Yes, the three most important steps of the distance ladder are parallax, Leavitt's Law, and SN Ia. The calibration of these steps does not depend on inhomogeneities or anisotropies.
 
  • #44
Jaime Rudas said:
In my opinion, the ΛCDM model remains the paradigm because it is, by far, the model that best fits the observations, and no alternative has emerged that even remotely comes close in this regard.
Does ΛCDM really fit the observations well?

(I won't litter this comment with citations for all of them but can easily produce them if desired.)

1. Galaxy formation occurs much sooner than predicted.
2. The predicted value of the growth index in the ΛCDM Model, that measures the growth of large scale structure, is in strong (4.2 sigma) tension with observations, given the model's measured parameters.
3. CDM predicts fewer galaxy clusters than are observed.
4. There are too many colliding clusters and when they are colliding they are on average, colliding at too high relative velocities.
5. Void galaxies are observed to have larger mean-distances from each other at any given void size than predicted by ΛCDM.
6. Voids between galaxies are more empty than they should be, and do not contain the population of galaxies expected in ΛCDM. See also the KBC void.
7. The gravitational lensing of subhalos in galactic clusters has recently been observed to be much more compact and less "puffy" than CDM would predict.
8. The 21cm background predictions of the theory are strongly in conflict with the EDGES data, as shown in the attached illustration:

[attached illustration]

9. The Hubble tension. Many analyses of the data prefer a non-constant amount of dark energy over cosmological history.
10. The σ8 -- S8 -- fσ8 tension
11. Increasing evidence that the universe is not homogeneous and isotropic.
12. ΛCDM provides no insight into the "cosmic coincidence" problem.
13. CDM gets the halo mass function (i.e. aggregate statistical distribution of galaxy types and shapes) wrong.
14. CDM doesn't adequately explain galaxies with no apparent dark matter, and has no means of predicting where they will be found.
15. KIDS evidence of less clumpy structure than predicted.
16. CDM should predict NFW shaped dark matter halos almost universally, but observations show that NFW shaped dark matter halos are rare.
17. CDM predicts cuspy central areas of dark matter halos, which are not observed. Physically motivated feedback models have failed to explain this observation.
18. CDM predicts more satellite galaxies than are observed.
19. CDM fails to predict that satellite galaxies strongly tend to lie in the plane of the galaxy.
20. The observed satellite galaxies of satellite dwarf galaxies are one hundred times brighter than ΛCDM simulations suggest they should be.
21. Flat rotation curves in spiral galaxies are observed to extend to about a million parsecs, but CDM predicts that they should fall off much sooner (at most in the tens or hundreds of thousands of parsecs).

22. CDM fails to explain why the baryonic Tully-Fisher relation holds so tightly over so many orders of magnitude, or why there is a similarly tight scaling law with a different slope in galaxy clusters.
23. The well-known scaling laws among the structural properties of the dark and the luminous matter in disc systems are too complex to have arisen from two inert components that merely share the same gravitational field, as CDM proposes.
24. CDM failed to predict in advance that low surface brightness galaxies appear to be dark matter dominated.
25. CDM erroneously predicted X-ray emissions in low surface brightness galaxies that are not observed.
26. CDM fails to predict the relationship between DM proportion in a galaxy and galaxy shape in elliptical galaxies.
27. CDM doesn't predict the relationship between bulge mass and number of satellite galaxies.
28. CDM predicts that too few metal-poor globular clusters are formed.
29. CDM does not explain why globular clusters, which are predicted and observed to have little dark matter, show non-Keplerian dynamics.
30. We do not observe in galaxy systems the Chandrasekhar dynamical friction we would expect to see if CDM were as proposed.
31. CDM greatly underestimates the proportion of disk galaxies that have very thin disks.
32. CDM doesn't explain why thick spiral galaxies have more inferred dark matter than thin ones.
33. CDM doesn't predict the absence of inferred dark matter effects in gravitationally bound systems that are within moderately strong gravitational fields of a larger gravitationally bound system.
34. Compact objects (e.g. neutron stars) should show equation of state impacts of dark matter absorbed by them, at rates predicted from estimated dark matter density, that are not observed.
35. ΛCDM, extended minimally to include neutrinos, is overconstrained in light of the latest DESI data, with a best fit to a negative neutrino mass, or at least to a sum of neutrino masses far lower than the minimum inferred from neutrino oscillations.
36. CDM does not itself predict the observation from lensing data that dark matter phenomena appear to be wave-like.
37. Extensive and varied searches for dark matter particle candidates have ruled out a huge swath of the parameter space for these particles while finding no affirmative evidence of any such particles. These searches include direct dark matter detection experiments, micro-lensing, LHC searches, searches for sterile neutrinos, searches for axion interactions, comparisons of the inferred mean velocity of dark matter due to the amount of observed structure in galaxies with thermal freeze out scenarios, searches for signatures of dark matter annihilation, etc.
38. CDM is not the only theory that can accurately produce the observed cosmic background radiation pattern (e.g., at least three gravity based approaches to dark matter phenomena have done so).
39. The angular momentum problem: In CDM, during galaxy formation, the baryons sink to the centers of their dark matter halos. A persistent idea is that they spin up as they do so (like a figure skater pulling her arms in), ultimately establishing a rotationally supported equilibrium in which the galaxy disk is around ten or twenty times smaller than the dark matter halo that birthed it, depending on the initial spin of the halo. This simple picture has never really worked. In CDM simulations, in which baryonic and dark matter particles interact, there is a net transfer of angular momentum from the baryonic disk to the dark halo that results in simulated disks being much too small.
40. The missing baryons challenge: The cosmic baryon fraction – the ratio of normal matter to total matter – is well known (16 ± 1%). One might reasonably expect individual CDM halos to be in possession of this universal baryon fraction: the sum of the stars and gas in a galaxy should be 16% of the total, mostly dark, mass. However, most objects fall well short of this mark, the only exception being the most massive clusters of galaxies. So where are all the baryons?
41. CDM halos tend to over-stabilize low surface density disks against the formation of bars and spirals. You need a lot of dark matter to explain the rotation curve, but not too much, in order to allow for spiral structure. This tension has not been successfully reconciled.
42. Many of the possible CDM particle scenarios disturb the well-established evidence of Big Bang Nucleosynthesis.

This list isn't comprehensive, but it is more complete than most (compare, e.g., the list from Wikipedia).

Three and a half dozen strong tensions or outright conflicts between ΛCDM predictions and observations doesn't sound like a great fit to observations to me.

In fairness, the original ΛCDM model formulated in the late 1990s wasn't intended to be perfect. It was a first approximation, focused mostly on cosmological observations, and was not unduly concerned with galaxy- and cluster-scale phenomena. The scientists who devised it knew perfectly well that they were ignoring factors (like neutrinos) that were present and had some effect, but were negligible relative to the precision of the astronomy observations available at the time, which were much cruder than recent observations - e.g., they didn't have the JWST or the Hubble Telescope or DESI or 21 cm measurements or gravitational wave detectors or decent neutrino telescopes. It wasn't supposed to be the final, be-all-and-end-all theory, and it has had a good run, probably lasting longer as the paradigm than originally expected.

And, again, competing paradigms are like duels to be the head of the tribe. Until you have a particular competitor that is clearly superior enough to displace the leader of the pack, it stays in the lead by default, even if its flaws are myriad. I'm not necessarily saying that the competitor has arrived.

But I'm also saying that a paradigm with so many conflicts with observations is vulnerable and has reduced credibility. So it shouldn't be taken as seriously as something like the Standard Model of Particle Physics, which has only a handful of recent and relatively minor tensions that currently remain unresolved after half a century of rigorous efforts to poke holes in it.

To circle back to the original question of the Hubble tension: all of these discrepancies, even if they merely require tweaks to the ΛCDM model rather than a wholesale abandonment of it, add credence to the possibility that the ΛCDM model details used to predict the early-time Hubble constant from the Cosmic Microwave Background radiation with high precision could cause the early-time Hubble constant value to be underestimated. This is so even assuming that the Planck CMB measurements are correct and were correctly inserted into the status quo ΛCDM model. Such model details could easily have resulted in a 2-6% too low early-time Hubble constant determination from the CMB measurements.
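For scale, a quick sanity check on the size of the early-versus-late gap can be done with the commonly quoted central values (Planck 67.4 ± 0.5, SH0ES 73.0 ± 1.0 km/s/Mpc); these specific numbers and error bars are illustrative assumptions, not figures from this thread, and the significance estimate naively treats the two determinations as independent Gaussians:

```python
import math

# Illustrative, commonly quoted H0 values in km/s/Mpc (assumed, not from the thread).
h0_early, sigma_early = 67.4, 0.5   # early-time (CMB-based, Planck 2018)
h0_late, sigma_late = 73.0, 1.0     # late-time (Cepheid-calibrated supernovae, SH0ES)

# Fractional gap: how much lower the early-time value is than the late-time one.
relative_gap = (h0_late - h0_early) / h0_late

# Naive significance, adding the two uncertainties in quadrature.
tension_sigma = (h0_late - h0_early) / math.hypot(sigma_early, sigma_late)

print(f"relative gap: {relative_gap:.1%}")    # ~7.7%
print(f"tension: {tension_sigma:.1f} sigma")  # ~5.0 sigma
```

On these assumed inputs the full gap is roughly 8%, so a 2-6% systematic shift in the CMB-derived value would close most, though not all, of the discrepancy.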
 
Last edited:
  • #45
Meh. Them's the kind of issues with the model as with the 'Earth is round' one. Both fail once you get down to the nitty-gritty, and want to know where all the kinks and ridges and bulges come from. But it's pretty clearly a generally-correct background on which to build, and whatever model eventually supersedes LCDM will have to resemble it at less granular scales.
 
  • Skeptical
  • Like
Likes ohwilleke and Jaime Rudas
  • #46
ohwilleke said:
Does ΛCDM really fit the observations well?
Well, I didn't say that it fits the observations well, but that it is the model that best fits the observations. Do you know of any model that fits the observations better than ΛCDM?
 
  • #47
ohwilleke said:
Does ΛCDM really fit the observations well?
Fairly substantial list. One detail often missed is that LCDM evolves as new research findings become conclusive; it's rather adaptive in that regard. For example, prior to WMAP there was a rather large list of possible universe geometries that had viability. Now any complex geometries, such as variations of the Klein bottle, are constrained.
Consider this example
Consider this example

Encyclopaedia Inflationaris


https://arxiv.org/abs/1303.3787

This is a comprehensive list of different inflation scenarios. If one were to look over this list, could anyone state "this inflation theory is LCDM"? Or would one consider them all to be possible options under LCDM?

Take for example this line from the above introduction.

"namely the slow-roll single field models with minimal kinetic terms. "

This description is what was identified as the best-favored fit from the first Planck dataset.

Another decent example is the following:

"The Cosmic Energy Inventory"
https://arxiv.org/pdf/astro-ph/0406095v2

If one were to look through the values given in this article, one would find that many of them have since been replaced with better estimates.

As LCDM is adaptive, it's likely to stick around for quite some time, albeit it will continually adapt and improve as research findings become available.
With regard to the Hubble tension, I would be extremely surprised if LCDM could not adapt to the new findings, whatever they may be.
As someone who has actively watched cosmology research develop over the past 35 years or so, it has often amazed me how adaptive LCDM is.
 
Last edited:
  • #48
Mordred said:
For example prior to WMAP you had a rather large list of possible Universe geometries that had viability. Now any complex geometries such as variations of the Klein Bottle are constrained.
As I understand it, the WMAP observations don't constrain the existence of a complex topology for the universe, but rather its size. That is, if, for example, the universe has a flat 3-torus topology, its size would have to be several times that of the observable universe.
 
Last edited:
  • #49
It did apply limited constraints, as you described, that were later further tightened in the first Planck dataset. Lol, that was quite a few years ago, so my memory of the events could very well be a little sketchy on the details from the WMAP results. Though I do recall all the space.com forum arguments (back when it existed) debating complex geometries under WMAP.
 
  • #50
Mordred said:
It did apply limited constraints as you described that were later further constrained in the first Planck dataset.
What kind of constraints are you referring to?
 
Back
Top