Garth said:
In an eprint, Kumar & Lohiya, "Nucleosynthesis in slowly evolving Cosmologies", show that in the FC model BBN continues for much longer, and they claim that consequently, to get the right amount of helium, the baryon density becomes 28% of the closure density (with Coulomb screening)
1) The Melia papers are an example of a "good" nutty paper. He does some non-trivial calculations and comes up with some interesting things to think about. This Kumar and Lohiya paper is an example of a *marginal* nutty paper. They basically just say "wouldn't it be nice if Coulomb screening fixed the problem" without any sort of calculation that shows that the mechanism is even plausible. The statement that there is no good theory of Coulomb screening is false (ask the solid state people), and the schematic arguments also fail.
Their mechanism won't work because before the plasma cools, you have a mix of electrons + positrons + excess electrons, and so the net charge of the electron-positron cloud is going to be the same before and after the pairs annihilate. Also, electron screening is very well studied in Earth-based fusion experiments. The problem with electron screening is that electrons are light, so the wave nature of the electron means that you don't get very much screening. You might do better with muons.
One issue here is that we are not in "string theory land" where you can make things up. If you think that there is a lot of Coulomb screening, you can ask someone to set up a fusion reactor or blow up a hydrogen bomb and see. The fact that I'm not getting my electricity from a fusion power plant suggests that there isn't anything that could dramatically increase reaction rates.
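For a sense of scale, here's a back-of-the-envelope sketch (mine, not from their paper) of the standard Salpeter weak-screening enhancement factor in the BBN-era plasma. The temperature is illustrative, and setting the e+e- pair density equal to the photon density deliberately overestimates the screening:

```python
# Rough estimate of Coulomb screening during BBN: Debye length of the
# e+e- plasma and the Salpeter weak-screening enhancement
# exp(Z1*Z2*e^2 / (4*pi*eps0 * lambda_D * kT)), here with Z1 = Z2 = 1.
import math

k_B   = 1.381e-23    # Boltzmann constant, J/K
hbar  = 1.055e-34    # reduced Planck constant, J*s
c     = 2.998e8      # speed of light, m/s
e     = 1.602e-19    # elementary charge, C
eps0  = 8.854e-12    # vacuum permittivity, F/m
zeta3 = 1.202        # Riemann zeta(3)

T  = 1.0e9           # K, a typical BBN temperature (illustrative)
kT = k_B * T
n_gamma = (2 * zeta3 / math.pi**2) * (kT / (hbar * c))**3  # photon number density
n_e = n_gamma        # ASSUMPTION: pair density ~ photon density (an overestimate here)

lambda_D = math.sqrt(eps0 * kT / (n_e * e**2))             # Debye screening length
boost = math.exp(e**2 / (4 * math.pi * eps0 * lambda_D * kT))

print(f"Debye length     ~ {lambda_D:.2e} m")   # ~1.5e-11 m
print(f"rate enhancement ~ {boost:.4f}")        # ~1.001, i.e. a ~0.1% effect
```

Even with the pair density overestimated, the enhancement comes out at a fraction of a percent, which is the quantitative version of "electrons are too light to screen much".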
In other words they suggest that the FC model resolves the identity of DM: it is actually dark baryonic matter, which then leaves open my question of where it all is today - IMBHs? Intergalactic cold HI?
And... you lose.
You get into all of the structure formation evidence that the dark matter can't be baryons. If we had lots of baryons then you ought to see *huge* inhomogeneities, which you don't.
Which gets you to another problem with slow growth models. Melia claims to have solved the horizon problem. The trouble is that he solves it too well. The universe is very smooth, but we do see lumps, and if the universe was always causally connected, it would be a lot smoother than what we see.
One way of thinking about it is that the big bang is like a "cosmic clarinet". A clarinet works because you have a reed that produces random vibrations. These vibrations then get trapped in a tube, which sets up standing waves that amplify those vibrations at specific frequencies. The big bang works the same way. You have inflation, which produces the initial static. At that point the vibrations get trapped in a "tube": there is a limit to how far apart vibrations can be and still affect each other. If the universe is five minutes old, then bits of space that are more than five light-minutes apart can't interact. This "cosmic horizon" creates a barrier that enhances some frequencies and not others.
So the universe works like a clarinet and produces a specific "sound". You can then figure out lots of stuff from the "sound of the big bang". If you grow the universe slowly, then the "cosmic horizon" is much further away, and I doubt you'd get much in the way of acoustic oscillations.
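To make the "horizon is much further away" point concrete: the textbook comoving particle horizon for a scale factor a(t) \propto t^n is

\chi(t) = \int_0^t \frac{c \, dt'}{a(t')} \propto \int_0^t \frac{dt'}{t'^{n}} ,

which is finite for n < 1 (radiation domination has n = 1/2, and that finite horizon sets the acoustic scale) but diverges at the lower limit for n \ge 1. A coasting a \propto t universe therefore has no particle horizon at all, and hence no characteristic scale to imprint on the oscillations.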
Yes, as you later say, one problem is that such a procrastinating BBN would destroy all the deuterium, which would mean primordial D would have to be made by another process such as spallation, possibly on the shock fronts of the hypernovae of Pop III stars.
Been there, done that, doesn't work.
One of the arguments against the big bang is that if you put the density of the universe into nucleosynthesis you get too little deuterium. In the 1970's, people tried to fix that problem by looking for mechanisms to make deuterium. After about a decade of trying, the conclusion was that it can't be done. You can make lithium and beryllium, but not deuterium. Deuterium is too fragile, and anything that is energetic enough to make deuterium is energetic enough to destroy it.
On the other hand, the FC model resolves the lithium problem in standard BBN.
Having too **little** lithium isn't a huge problem. You can easily imagine lots of things that could burn lithium and you can also question the accuracy of the stellar measurements.
Having too *much* deuterium is a big problem. It's easy to burn light elements; it's hard to generate them. Also, you can easily argue that measurements of early stars are just wrong. You can't do that with deuterium, because you measure the amount on Earth (i.e. put some water through a mass spectrometer). If there's too much now, then there was even more in the early universe, and the problem gets worse.
That is just what Gehlaut, Kumar and Lohiya claim in their eprint
A Concordant “Freely Coasting” Cosmology
Which is neither surprising nor interesting. If you keep temperatures high, you burn lithium. But it's easy to come up with a non-cosmological mechanism to burn lithium, or to argue experimental error for the "under-abundance".
Also, that paper goes through a lot of calculations to come up with an alternative mechanism of structure formation. However, the one thing that is missing is a graph. They come up with equations; I'd like to see them try to fit the equations to the data.
Actually that is not so; H_0 has been determined without resorting to the LCDM model.
Correct. H_0 is an observational parameter that is model independent.
And the age of the universe is determined by
T = \frac{1}{H_0} \int_0^1 \frac{da}{\sqrt{\Omega_{k,0} + \dfrac{\Omega_{m,0}}{a} + \dfrac{\Omega_{r,0}}{a^2} + \Omega_{\Lambda,0}\, a^2}} .
The coincidence therefore means the integral is unity, within observational error, with no mention of LCDM. The cosmological parameters determined in the LCDM model coincidentally result in the integral having a value of 1 at, and only at, the present epoch, but in the FC model they do so necessarily because the EOS is ω = -1/3.
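(To spell out the standard FRW result behind that claim: for a single fluid with equation of state p = w \rho c^2, the scale factor grows as a(t) \propto t^{2/[3(1+w)]}, so w = -1/3 gives a \propto t, hence H = \dot{a}/a = 1/t and H_0 t_0 = 1 identically, at every epoch.)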
Except that in the standard cosmology, the integral "magically" becomes one because we've calculated the various omegas and by some cosmic coincidence the result happens to be one. If you toss out the calculations of the omegas, then there is no "magic". The omegas are bogus and so is the integral, and you have nothing to explain.
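For what it's worth, the coincidence is easy to check numerically. A quick sketch (mine; the parameter values are just illustrative WMAP-era numbers, not taken from any of these papers):

```python
# Evaluate H0*t0 = \int_0^1 da / sqrt(Om_k + Om_m/a + Om_r/a^2 + Om_L*a^2)
from scipy.integrate import quad

def h0_t0(om_m, om_r, om_l):
    om_k = 1.0 - om_m - om_r - om_l          # curvature term from the closure relation
    f = lambda a: (om_k + om_m / a + om_r / a**2 + om_l * a**2) ** -0.5
    value, _err = quad(f, 0.0, 1.0)
    return value

print(h0_t0(om_m=0.27, om_r=8.4e-5, om_l=0.73))  # ~0.99: unity only by "coincidence"
print(h0_t0(om_m=0.0,  om_r=0.0,   om_l=0.0))    # exactly 1: the empty/coasting case
```

The first number is the coincidence the standard model has to live with; the second is the coasting case, where H_0 t_0 = 1 by construction.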
Given that these Indian eprints have been published, with their remarkable claims about a solution of the coincidence and other problems with the standard model, my question is: why hasn't a refutation of their claims been similarly published?
Because a refutation is not interesting or publishable. If it takes you ten minutes to come up with a fatal flaw in a paper, then it's not worth your time to write a rebuttal. If I can see the problems in the paper in a few minutes, and everyone else can see the problems in the paper, then what's the point of wasting time writing a formal paper? You only write a rebuttal paper if it takes you two weeks of thinking to figure out what's wrong with it.
Also, the Indian eprints do not claim a solution. They are more like excuses (and pretty weak excuses) to explain what everyone knows are flaws. No smoking gun. And there are some obvious issues that make their arguments weak. For example, in their structure formation paper, they come up with a bunch of Greek equations. Now it would be trivial to come up with a "best fit" graph where you take their model and plot it against the observational data. If it's even close, then *that* would be interesting. They don't, and reading between the lines, one tends to assume that they haven't plotted the data because they can't come up with a set of parameters that match WMAP. Maybe that's not the case, but they are the ones that are writing the paper (and if I were a peer reviewer, I'd ask them to try to do a best fit to known data and comment on it).
The other thing is that we are in "adversarial boxing mode" and not "teaching mode." If I had a student write a research paper about slow growth cosmologies and they talked about deuterium spallation, then I'd mention to them that they should include some references to the work in the 1970's that concluded it can't be done. If you have a paper that talks about slow growth cosmologies, the rules are different. If someone seems to be unaware that deuterium spallation won't work, that messes with credibility, and that makes it more likely that the paper will be trashed.
One thing about presenting an unconventional idea is that you have to have enough stuff so that people will at least argue against it. MOND and f(R) gravity have gotten to this point. Slow cosmologies haven't.
I get the feeling that, as with my own work, non-standard ideas are simply given the 'silent treatment'.
The trouble with non-standard ideas is that you have to argue why *your* non-standard idea is worth people's time. There are hundreds of non-standard ideas. Why is *this* non-standard idea worth looking at? MOND and f(R) gravity have gotten to this point.
And yes, people do give the silent treatment, because people have limited amounts of time, and you have to show that your idea is worth arguing about. In the case of r=ct, this won't be a problem. We'll have very good data on cosmic acceleration in the next two to three years, and if it starts looking like the universe is expanding at a constant rate, then the floodgates will open. If you think about ways of fixing the nucleosynthesis now, and it turns out that the data shows that the universe is *not* coasting, then you've just wasted a year or two that you could have spent doing other things.
The Linearly Expanding model keeps making a comeback: first proposed by Milne, re-suggested by Dirac's LNH, resurrected by Kolb as an alternative to Inflation, worked on by the Indian team, and now Melia has independently discovered it by using the Weyl Hypothesis!
Right. And the question is whether people are "rediscovering" things because there is new data, or because people just forgot or don't know about the reasons why it was "undiscovered" before. There are reasons why inflation won. Also you'd think that the nucleosynthesis calculations that the Indian group are trying to do were done in the 1970's, and the theoretical arguments that require R=ct were thrashed out in the 1940's.
One thing that I've noticed is that the people that are most skeptical about LCDM tend to be galaxy cluster people. They have good reasons, because when you get to cluster scales LCDM really doesn't work very well. The trouble is that it works *really* well for the early universe.