CKH said:
I'm no expert but I have read that predictions about or resulting from the CMB involve some six adjustable parameters.
It is quite easy to browse the Planck archive; here is the paper on extracting cosmological parameters. They show that 6 parameters, which was the best fit for WMAP, is still best. I.e. the penalty for adding more free parameters outweighs the improvement from a better fit.
Now, this is for inflationary LCDM, i.e. it includes inflationary predictions. What I was referencing was that without the HBB, the CMB can't be predicted. Or, conversely, you can conclude from the CMB that there was a HBB.
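As an aside on the parameter penalty: below is a minimal sketch of the general idea using the Akaike information criterion (AIC), which is just one common way of penalizing extra parameters. It is not the Planck collaboration's actual model-comparison method, and the numbers are invented for illustration.

```python
# Illustrative only: AIC = 2k - 2*ln(L_max) penalizes free parameters,
# so an extra parameter must improve the fit by more than the penalty.
def aic(num_params, max_log_likelihood):
    """Akaike information criterion; lower is better."""
    return 2 * num_params - 2 * max_log_likelihood

# Hypothetical fits: the 7-parameter model fits marginally better,
# but not by enough to pay for the extra parameter.
aic_6 = aic(6, max_log_likelihood=-5000.0)
aic_7 = aic(7, max_log_likelihood=-4999.4)

print(f"AIC, 6 parameters: {aic_6:.1f}")
print(f"AIC, 7 parameters: {aic_7:.1f}")
print("Prefer 6 parameters" if aic_6 < aic_7 else "Prefer 7 parameters")
```

The point is only that a better fit has to beat the penalty for the extra parameter; the actual Planck model comparison is of course far more sophisticated.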
CKH said:
It's hard to see where this "no free parameters" claim comes from. I assume you are talking about the E polarization, since B measurements are already in conflict.
Yes, that is my understanding of that description of the CMB.
CKH said:
Every time new data is available, the parameters are tweaked to match. Can this truly be claimed as prediction?
I'm repeating myself to you from another thread, but it can be said again.
What Drakkith says is correct: the best theory is the one the evidence supports. And I gave a reference to why there are no remaining contenders for predicting the CMB today; the alternatives are all failed theories. (There are theories that include a HBB and are contenders to inflation, albeit not as well quantified. But that is discussed in another thread.)
But one can take a more basic approach.
Physics isn't math, because the physical dimensions of reality, and hence uncertainty, are involved. This shows up in that you can't use mathematical "proof" (a mutually agreed-on procedure) in physics, since an axiomatic procedure can't cope with, say, quantization. Instead we have to use testing to mutually agreed-on quality standards (3 and 5 sigma, say).
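For concreteness, here is a minimal sketch of what those sigma thresholds correspond to as Gaussian tail probabilities; this is the standard conversion, nothing specific to any one experiment.

```python
# Convert n-sigma quality standards into two-sided Gaussian tail probabilities.
from scipy.stats import norm

for n_sigma in (3, 5):
    p_two_sided = 2 * norm.sf(n_sigma)  # chance of a fluctuation at least this large
    print(f"{n_sigma} sigma -> p = {p_two_sided:.1e}")
# 3 sigma -> p = 2.7e-03, 5 sigma -> p = 5.7e-07
```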
That takes us to measurement theory, which is what I had to study as part of basic physics at university. It tells us that everything empirical can be described by hypothesis testing, whether observations (a hypothesis on an observed value and its uncertainty), hypotheses (a hypothesis on a mechanism) or theories (sets of interrelated hypotheses; a hypothesis on a process).
The key point there is that when we test an observation or a mechanism, we also fix the free parameters to a range and test the constraints of the experiment. No "assumptions", no "adjustments". The result is that we have to look at robustness and at tension with other experiments. If an observation or a theory survives repeated testing, it is robust. Sure, the constraints vary a little between WMAP and Planck, and there is some tension at 1-2 sigma on some parameters, but the result is robust.*
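To illustrate what "tension at 1-2 sigma" means, here is a minimal sketch of the usual way such tension is quantified: the difference between two independent measurements in units of their combined uncertainty. The numbers below are hypothetical, not actual WMAP or Planck values.

```python
# Hypothetical example of quantifying tension between two independent measurements.
import math

def tension_sigma(value_a, err_a, value_b, err_b):
    """Gaussian tension: |a - b| / sqrt(err_a^2 + err_b^2)."""
    return abs(value_a - value_b) / math.sqrt(err_a**2 + err_b**2)

# Invented constraints on some parameter from two experiments:
t = tension_sigma(0.70, 0.02, 0.67, 0.01)
print(f"Tension: {t:.1f} sigma")  # ~1.3 sigma, i.e. consistent at the 1-2 sigma level
```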
Another test of robustness, and an easing of tension, is whether a theory is self-consistent, so that it is more likely to survive internal and external challenges. LCDM is such a theory, the first self-consistent cosmology.
A further test of robustness is usefulness: does the theory survive long and productively? LCDM does; it has launched 2 space experiments (WMAP that made its case, Planck that strengthened it) and many ground-based ones, as well as many theory variants and useful cosmological methods (BAO as distance rulers, weak lensing to probe structures, ...).
That measurement theory goes beyond usefulness, to what may one day approach "proof" in the sense of a mutually agreed-on procedure, was clinched by the LHC finding a standard Higgs:
"
The Laws Underlying The Physics of Everyday Life Are Completely Understood
... A hundred years ago it would have been easy to ask a basic question to which physics couldn’t provide a satisfying answer. “What keeps this table from collapsing?” “Why are there different elements?” “What kind of signal travels from the brain to your muscles?” But now we understand all that stuff. (Again, not the detailed way in which everything plays out, but the underlying principles.) Fifty years ago we more or less had it figured out, depending on how picky you want to be about the nuclear forces.
But there’s no question that the human goal of figuring out the basic rules by which the easily observable world works was one that was achieved once and for all in the twentieth century.
You might question the “once and for all” part of that formulation, but it’s solid. Of course revolutions can always happen, but there’s every reason to believe that our current understanding is complete within the everyday realm. ..."
[http://blogs.discovermagazine.com/c...s-of-everyday-life-are-completely-understood/ ; my bold.]
The process of competition under testing works generally, for everyday physics as well as for the exotic physics of LCDM; eventually there are no more possible contenders.**
[So maybe some day this will be accepted as "proof", in the sense of mutually agreed-on procedures moving beyond reasonable doubt, in the same way as "testing" is, in the sense of mutually agreed-on quality levels moving beyond reasonable doubt. But that is a philosophical issue, and I don't want to go into those.]
* So what happens if the fixed parameter ranges fail, and we have to change them? Why, then we have an old, dead theory and a new, live theory, of course! The old parameter ranges can't be resurrected, unless there is something wrong with the observations or with the treatment of the theory under test. The "theory" under test is the constrained one, not the whole set of theories that the unconstrained free parameter ranges describe, which can still be viable with some other parameter choice.
Then again, if we test and fail many times, and keep adding parameters under penalty, eventually the area must be abandoned as unproductive.
**
Why that works is an open question, which goes beyond the simple observation that it does.