Is Self Creation Cosmology a Viable Alternative to the Standard Model?

  • #1
Garth
After an abortive start in the new IR Forum I am beginning a new thread on the published theory of Self Creation Cosmology.
There have already been many posts on the subject in PF, and I apologise for any repetition, but, having been asked to post it here in A&C, I am making a clean start!
The published papers are:
On Two Self Creation Cosmologies
http://www.kluweronline.com/oasis.htm/5092775
and here:
http://novapublishers.com/catalog/product_info.php?products_id=1869
Abstract from the most recent paper:
Self Creation Cosmology An Alternative
Gravitational Theory
Garth A Barber
June 10, 2004
Abstract
A question is raised about the premature acceptance of the standard cosmological model, the 'LambdaCDM' paradigm; the non-metric, or semi-metric, theory of Self Creation Cosmology is offered as an alternative and shown to be as concordant with observed cosmological constraints and local observations as the standard model. In self-creation the Brans Dicke theory is modified to enable the creation of matter and energy out of the self-contained gravitational and scalar fields; such creation is constrained by the local conservation of energy so that rest masses vary whereas the observed Newtonian gravitation 'constant' does not. As a consequence there is a conformal equivalence between self-creation and General Relativity in vacuo, which results in the predictions of the two theories being equal in the standard tests. In self-creation test particles in vacuo follow the geodesics of General Relativity. Nevertheless there are three types of experiment that are able to distinguish between the two theories. There are also other local and cosmological observations that are readily explained by self-creation, such as the anomalous sunwards acceleration of the Pioneer spacecraft and a secular spinning up of the Earth's rotation, both of which 'coincidentally' echo Hubble's constant. Moreover, the most significant feature of self-creation is that it is as consistent with cosmological constraints in the distant supernovae data, the Cosmic Microwave Background anisotropies and primordial nucleosynthesis as the standard paradigm. Unlike that model, however, it does not require the addition of the undiscovered physics of Inflation, dark non-baryonic matter, or dark energy. Nevertheless it does demand an exotic equation of state, which requires the presence of false vacuum energy at a moderate density determined by the field equations. Consequently it is able to interface gravitation and quantum theories without creating a 'Lambda' problem. In self-creation there are two frames of interpretation of observational data, which depend on whether energy or energy-momentum is to be conserved and whether photons or atoms respectively are chosen as the invariant standards of measurement. In the former frame the universe is stationary and eternal with exponentially shrinking rulers and accelerating atomic clocks, and in the latter frame the universe is freely coasting, expanding linearly from a Big Bang with rigid rulers and regular atomic clocks. A novel representation of space-time geometry is suggested. As the theory is readily falsifiable it is recommended that all three of the definitive experiments be performed at the earliest opportunity.
You may not be able to access these; however, there is free access to these papers on the physics ArXiv, and the published work can be recovered from the last two of the following eprints:
1. gr-qc/0212111: The Principles of Self Creation Cosmology and its Comparison with General Relativity
2. gr-qc/0302026: Experimental tests of the New Self Creation Cosmology and a heterodox prediction for Gravity Probe B
3. gr-qc/0302088: The derivation of the coupling constant in the new Self Creation Cosmology
4. astro-ph/0401136: The Self Creation challenge to the cosmological concordance model
5. gr-qc/0405094: Self Creation Cosmology - An Alternative Gravitational Theory

The reason why I am posting on PF at all is because I value your informed and constructive criticism. From my Profile you will read: "I am a published independent researcher in cosmology". The key word here is independent: it is very difficult to obtain valued and informed criticism if you are no longer in a university department. PF is for me a "physics department coffee lounge" where ideas can be suggested and knocked down, or otherwise. I value that.

Predictions of the Theory

The theory is completely equivalent to GR in vacuo; therefore all tests to date which compare the geodesics of test particles and photons with observation are concordant with both GR and SCC.

The cosmological solution requires a homogeneous density; therefore the result differs from GR.

R(t) ~ t
k = +1
A finite but conformally flat model concordant with the WMAP CMB anisotropy spectrum (not only the first peak but also the lack of large-angle anisotropies).

[tex]\Omega_m = 2/9[/tex] (0.22)
[tex]\Omega_\Lambda = 1/9[/tex] (0.11) (false vacuum)
[tex]\Omega_{total} = 1/3[/tex] (0.33)

1. GPB Geodetic precession
SCC: 5.5120 arcsec/yr
GR: 6.6144 arcsec/yr

GPB gravitomagnetic frame dragging precession
SCC = GR = 0.0409 arcsec/yr

2. LIGO interferometer: an 8 km light path deflected towards the Sun by
2 × 10^-12 metres vertically.
A 'Space Interferometer Experiment' that would test the same effect is also suggested in my papers.
Deviation from the EEP by solid objects: a 10 cm aluminium block in vacuo would violate the EEP at one part in 10^17, three orders of magnitude below present experimental sensitivity.

3. Casimir force 'bottoming out', detectable somewhere in the solar field between the orbits of Jupiter and Saturn (depending on instrument sensitivity).
SCC predicts the maximum Casimir force to be a function of space-time curvature.

4. Pioneer spacecraft anomalous Sunwards acceleration of
cH = 6.6 × 10^-8 cm/s^2.
5. Decrease in the length of the Earth's day, relative to ancient solar eclipse records (lunar orbit), at a rate of
H = 6 × 10^-4 s/day/century.
NB: the last two may already have been observed.
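
Both of those last two numbers are just Hubble's constant expressed in different units. Here is a quick back-of-envelope sketch (mine, not from the papers; it assumes H0 = 68 km/s/Mpc, a value not stated above):

[code]
# Sketch (my unit conversions, assuming H0 = 68 km/s/Mpc): express Hubble's
# constant as an acceleration cH and as a secular change in the length of day.
MPC_CM = 3.0857e24            # cm per megaparsec
C_CM = 2.9979e10              # speed of light, cm/s

H0 = 68.0 * 1.0e5 / MPC_CM    # Hubble constant in s^-1

print(f"cH   = {C_CM * H0:.2e} cm/s^2")                    # ~6.6e-8, as quoted

DAY = 86400.0                 # seconds per day
CENTURY = 100.0 * 365.25 * DAY
print(f"dLOD = {H0 * DAY * CENTURY:.1e} s/day/century")    # ~6e-4, as quoted
[/code]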


The following is an extract from my introduction to the “Comparison of the Mainstream and the Self Creation Freely Coasting models” thread and matches my work with a largely Indian team who have worked on what they call the "Freely Coasting Model (FCM)".

Introduction to FCM

The FCM is an empirical model, proposed by a team at the University of Delhi, in which the universe expands strictly linearly with time, R(t) ~ t. Its motivation was the realisation that such a model would not require inflation to explain the horizon, flatness or smoothness problems of GR, as they would not exist in the first place. It was then realised that the model was surprisingly concordant with cosmological constraints without the further addition of concepts such as DM or DE that remain undiscovered in laboratory physics. There have been several papers published and PhDs gained exploring this alternative cosmological paradigm, viz:

1. A coasting cosmology
2. Freely Coasting Cosmology
3. A Concordant “Freely Coasting” Cosmology
4. A case for nucleosynthesis in slowly evolving models
5. Nucleosynthesis in a Simmering Universe
and a PhD thesis available on the physics ArXiv:
6. Gravitational Lensing in Standard and Alternative Cosmologies
However, the shortfall of this concordant empirical theory is that it requires a mechanism to deliver the strictly linear expansion.

Independently of the Indian team's work I have developed SCC as an alternative gravitational theory that modifies GR to include a 'non-minimally coupled scalar field'. There are seven papers and eprints referred to above. (There have also been 47 other author citations in peer-reviewed journals.)

Self Creation Cosmology

The SCC scalar field follows that in the theory of Brans Dicke (BD) and is coupled to the distribution of matter in motion in the universe in order to fully incorporate Mach's Principle. SCC modifies BD in that it allows the scalar field to act on particles and thus violates the equivalence principle. The presence of the scalar field in BD and SCC perturbs space-time; this is the reason BD is not concordant with solar system experiments. However, in SCC the scalar field force operates on particles, but not photons, and corrects this perturbation. The geodesics of test particles and photons are the same in SCC as in GR. SCC is concordant with all experiments to date; however, there are several tests that could easily falsify the theory because they do not merely test whether trajectories follow GR geodesics. One of these tests, the Gravity Probe B satellite experiment, is being carried out at present, and the results will be known next year.

SCC has two conformal frames of measurement, the Jordan frame in which particle masses increase with gravitational potential energy and in which gravitational trajectories and cosmological evolution are calculated, and the Einstein frame in which particle masses are constant and in which other physics is most easily described.

The cosmological solution is not in general a vacuum solution; therefore SCC cosmology differs from that of GR. The empty universe solution reduces to the GR Milne model. When the Jordan conformal frame cosmological solution (which turns out to be the same as Einstein's original cylindrical static model) is transformed into the SCC Einstein conformal frame, it turns out to be a strictly linearly expanding solution; that is, it provides the linear expansion mechanism for the FCM.


Two differences from the LCDM standard model of GR are that the FCM predicts a baryon density of around 0.2 of closure density (in other words, there is no need for exotic Dark Matter), and that the primordial output of the BB had high metallicity compared to standard GR BBN. In other words, DM does actually exist, but it was originally baryonic and only now resides in some dark form. The question for the FCM and the SCC theory is: "In what form is this matter today?"

One clue is the ubiquitous presence of
1. re-ionisation in the IGM and
2. metallicity in early Lyman alpha forests.

These may be evidence of a fairly isotropic background of Pop III stars that formed at around z = 20. The paper 'A very extended re-ionisation epoch?' also suggests that there was a late period of Pop III star re-ionisation that finished at z ≥ 10.5. This would then date the end of such stars, the 'transition redshift'.

As a comparison, therefore, the active lifetime of Pop III stars in the two models is calculated to be (using LCDM values for the GR model):

For the onset of metallicity, i.e. 'ignition' of Pop III stars, z = 20:
t(z=20) = 182 Myr in GR
t(z=20) = 657 Myr in SCC

For the transition period, i.e. the end of Pop III stars, z = 10.5:
t(z=10.5) = 450 Myr in GR
t(z=10.5) = 1.31 Gyr in SCC

Thus the active lifetime of Pop III stars is ~270 Myr in GR and ~650 Myr in SCC, i.e. over twice as long. Note that if this late re-ionisation period does not in fact exist then the transition period is much earlier and the Pop III lifetimes are drastically reduced.
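
For anyone who wants to check these ages, here is a minimal sketch (mine, not from the papers; the parameter choices H0 = 71 km/s/Mpc, Omega_m = 0.27 and Omega_Lambda = 0.73 are my assumptions):

[code]
# Sketch: age of the universe at redshift z in flat LCDM versus a strictly
# linear "freely coasting" model, R(t) ~ t. Parameter values are assumed.
import numpy as np
from scipy.integrate import quad

H0 = 71.0                   # km/s/Mpc (assumed)
H0_GYR = H0 / 977.8         # in Gyr^-1 (1 km/s/Mpc = 1/977.8 Gyr^-1)

def age_lcdm(z, om=0.27, ol=0.73):
    """Age (Gyr) of a flat LCDM universe at redshift z."""
    f = lambda zp: 1.0 / ((1.0 + zp) * np.sqrt(om * (1.0 + zp)**3 + ol))
    t, _ = quad(f, z, np.inf)
    return t / H0_GYR

def age_coasting(z):
    """Age (Gyr) of a linearly expanding universe: t(z) = t0 / (1 + z)."""
    return (1.0 / H0_GYR) / (1.0 + z)

for z in (20.0, 10.5):
    print(f"z = {z}: GR/LCDM {age_lcdm(z)*1e3:5.0f} Myr, "
          f"SCC/coasting {age_coasting(z)*1e3:5.0f} Myr")
[/code]

With these values the LCDM ages land at ~184 and ~453 Myr and the coasting age at z = 20 at ~656 Myr; the 1.31 Gyr figure at z = 10.5 corresponds to a somewhat lower assumed H0.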

However, how massive were Pop III stars, and how many of them were there? The SCC speculation is that, given the primordial gas (PG) had some metallicity,
[tex]\mathrm{[Fe/H]} = \log_{10}(N_{Fe}/N_H)_{PG} - \log_{10}(N_{Fe}/N_H)_{Solar} = -5,[/tex]
the first Pop III stars could be smaller than the standard model allows. Metallicity is important in radiating away heat to allow the proto-stars to collapse. The range [10^2 - 10^4] Msolar is suggested, as such stars would leave behind IMBHs of the same mass range, and this range seems to be concordant with observation. So DM consists of a background of IMBHs in the range [10^2 - 10^4] Msolar.
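
(To spell out the metallicity notation, my gloss: [Fe/H] = -5 means
[tex](N_{Fe}/N_H)_{PG} = 10^{-5}\,(N_{Fe}/N_H)_{Solar},[/tex]
i.e. an iron abundance of one hundred-thousandth of the solar value.)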

Will this idea work, that is does the hypothesis that DM consists largely of IMBHs fit observation?

Garth
 
  • #2
Garth said:
... So DM consists of a background of IMBHs in the range [10^2 - 10^4]Msolar.

Will this idea work, that is does the hypothesis that DM consists largely of IMBHs fit observation?
IMBH abundance is not ruled out, but is controversial. A sampler:

http://arxiv.org/abs/astro-ph/0405355
Cosmic Star Formation, Reionization, and Constraints on Global Chemical Evolution

http://arxiv.org/abs/astro-ph/0302101
Intermediate-Mass Black Holes in the Universe: A Review of Formation Theories and Observational Constraints


http://arxiv.org/abs/astro-ph/0202218
Constraints on primordial black holes and primeval density perturbations from the epoch of reionization

http://arxiv.org/abs/astro-ph/9902028
Constraints on the mass and abundance of black holes in the Galactic halo: the high mass limit

http://arxiv.org/abs/astro-ph/9511032
Constraints on massive black holes as dark matter candidates using Galactic globular clusters
 
  • #3
Thank you Chronos for those interesting links. I wonder whether IMBHs have not already been detected and mis-identified as MACHOs; see 'POINT-AGAPE Pixel Lensing Survey of M31: Evidence for a MACHO contribution to Galactic Halos'.

The main obstacle to accepting IMBHs as the major component of DM is that mainstream BBN constrains [tex]\Omega_{baryon} = 0.04[/tex]. My question is: if this limitation is lifted (by the FCM BBN), could the DM identification problem be solved?

Note: in my thread "Submitted Research: Self Creation Cosmology, by Garth" in the IR Forum ZapperZ rather cynically asked:
I don't know if this is the appropriate place to ask this, but you have been here long enough to be able to answer this. In all the interactions you have had on here, who do you think has the expertise to be able to either comment, criticize, or judge the validity of your work?
My answer, which I never posted in that thread, was that yes, there are some, such as yourself Chronos, and others such as the 'Mentors', who have made constructive comments and criticisms of my work. One of the greatest contributions has been in providing relevant links to physics ArXiv papers and other academic web pages, such as yours above, which enable me to keep up to date with a multitude of developments that I might otherwise have missed. Thank you.

Garth
 
  • #4
I think Zz has a point [albeit a little pessimistic], but any kind of reasonably informed feedback would seem better than nothing [not to mention we work pretty cheap]. Anyways, I have another recent selection that might be of interest:

http://arxiv.org/abs/astro-ph/0507439:
Title: Heavy Element Production in Inhomogeneous Big Bang Nucleosynthesis
 
  • #5
Retraction and correction of Self Creation Cosmology GP-B prediction

Retraction
Since publishing my 2002 paper I have been pleased to discover that the Gravity Probe B satellite appeared to provide a test that could falsify SCC. Earlier I repeated the prediction in this thread.
Garth said:
1. GPB Geodetic precession
SCC: 5.5120 arcsec/yr
GR: 6.6144 arcsec/yr

GPB gravitomagnetic frame dragging precession
SCC = GR = 0.0409 arcsec/yr
The SCC prediction is more complicated than the GR calculation as freely orbiting bodies have an extra, Newtonian-like, scalar field force acting on them (but not on photons). Over the years I have worried that I may not have included all the extra factors complicating the calculation.

In all other solar system experiments the scalar field force exactly compensated for the perturbation of space-time curvature from the GR value. I worried that this did not appear to happen in the case of geodetic precession.

Today, to my dismay, I have realized that my geodetic calculation does not include the Thomas precession on the GP-B gyroscopes properly.

When the effect of the Thomas precession, due to the scalar field force accelerating the gyroscopes, is taken into account, the SCC geodetic precession is equal to that of GR.

So the above prediction should read:
1. GPB Geodetic precession
SCC = GR = 6.6144 arcsec/yr
GPB gravitomagnetic frame dragging precession
SCC = GR = 0.0409 arcsec/yr

Falsification of SCC will now depend on somebody performing the definitive test, which is that photons are predicted to 'fall' at a rate 3/2 that of particles.

A horizontal laser path, such as in the LIGO interferometers, should be perturbed towards the Sun relative to the Earth. With an 8 km light path the perturbation is predicted to be 2 × 10^-12 metres.
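
As a rough order-of-magnitude sketch of where a number like that comes from (my own back-of-envelope with assumed values, not the paper's detailed calculation): photons falling at 3/2 the particle rate pick up an excess acceleration of half the Sun's local gravity over the light travel time:

[code]
# Order-of-magnitude sketch. Assumptions: one-way 8 km path, excess photon
# acceleration = 0.5 * g_sun at Earth's orbit.
GM_SUN = 1.327e20        # m^3/s^2
AU = 1.496e11            # m
C = 2.998e8              # m/s

g_sun = GM_SUN / AU**2            # Sun's gravity at 1 AU, ~5.9e-3 m/s^2
t = 8.0e3 / C                     # light travel time over 8 km
sag = 0.5 * (0.5 * g_sun) * t**2  # displacement from the excess acceleration
print(f"{sag:.1e} m")             # ~1e-12 m, same order as the 2e-12 m quoted
[/code]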

I will be publishing this correction shortly.

Garth
 
  • #6
Garth said:
Falsification of SCC will now depend on somebody performing the definitive test, which is photons are predicted to 'fall' at a rate 3/2 that of particles.
Does SCC envision any refractive effect to produce this discrepant infall rate? If not, how do you model a gravitational force that manages to affect massless photons 50% more efficiently than massive particles?
 
  • #7
turbo-1 said:
Does SCC envision any refractive effect to produce this discrepant infall rate? If not, how do you model a gravitational force that manages to affect massless photons 50% more efficiently than massive particles?
This is at the heart of SCC and why the split laser beam interferometer would be a definitive test; if photons do fall at the same rate as particles then SCC is dead in the water; there would be no resurrection. SCC would then simply be another of the invariant conformal gravity theories that have only rewritten GR in some inconvenient coordinate system.

In Brans Dicke an extra scalar field is introduced that is coupled to the trace of the stress-energy-momentum tensor (matter) by a coupling constant [itex]\lambda[/itex]. Its presence perturbs the curvature of space-time, and consequently BD is only concordant with solar system experiments if [itex]\lambda[/itex] is vanishingly small. This has led to its demise.

SCC introduces a principle of mutual interaction (PMI), which states that the scalar field is a source for the matter-energy field if and only if the matter-energy field is a source for the scalar field, by coupling

[tex]\nabla_\mu T_{M\,\nu}^{\;\;\mu}[/tex] to [tex]T_M[/tex], thus:

[tex]\nabla_\mu T_{M\,\nu}^{\;\;\mu} = f_\nu\left(\phi\right) \Box\phi = 4\pi f_\nu\left(\phi\right) T_M\ ,[/tex]

so that for an electromagnetic field, which is trace-free,

[tex]T_{em} = 0\ ,[/tex]

[tex]\nabla_\mu T_{em\,\nu}^{\;\;\mu} = 4\pi f_\nu\left(\phi\right) T_{em} = 4\pi f_\nu\left(\phi\right)\left(3p_{em} - \rho_{em}\right) = 0\ .[/tex]

Photons thus travel on null-geodesics, whereas particles do not.
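
A trivial sketch of that trace argument (mine, using the trace convention written above):

[code]
# For radiation p = rho/3, so the trace T = 3p - rho vanishes and the PMI
# source term 4*pi*f(phi)*T is zero: photons stay on null geodesics.
import sympy as sp

rho = sp.symbols('rho', positive=True)
p = rho / 3                     # radiation equation of state
print(sp.simplify(3*p - rho))   # -> 0
[/code]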

A remarkable feature of the PMI violation of the equivalence principle is that this ’scalar field force’ acts in a similar fashion to the gravitational force. It is proportional to the product of the masses of two freely falling bodies and inversely proportional to the square of their separation. Thus, if this force exists, it would be convoluted with the Newtonian gravitational force and affect the value of the Newtonian gravitational constant in all Cavendish type experiments.

Eotvos-type experiments, asking whether "atoms all fall at the same rate", which test the equivalence principle for different types of matter, would only find a violation at the 10^-17 level, three orders of magnitude smaller than present experimental sensitivity. (Such a violation depends on [tex]\frac{p}{\rho c^2}[/tex] for the materials being studied.)

The scalar field thus exerts an extra force, acting on freely-falling particles but not photons, that perturbs them from their geodesic trajectories. It works out that this scalar-field force exactly compensates for the perturbation of space-time by the BD scalar field. Particles and photons both travel along the geodesics and null-geodesics of GR, the theory is conformally equivalent to canonical GR in vacuo, and thus all experiments (including now the GP-B geodetic measurement) that verify GR also verify SCC.

The crucial difference is the direct measurement of the rate of acceleration of photons and particles in a gravitational field; an extension of the Eotvos experiments: "Do particles and photons 'fall at the same rate'?"


Garth
 
  • #8
Retraction of the Retraction : Self Creation Cosmology GP-B prediction

Retraction of the Retraction!

GP-B is back as a resolution of the degeneracy between SCC and GR!

As I said above, the geodetic precession (SCC precession = 5/6 GR precession) has to be corrected in SCC by a Thomas precession (that caused by the acceleration of a spin-axis in 4-space).

The Thomas precession in SCC is 1/6 of the GR geodetic precession, so above I worried that the total N-S GP-B precession rate was going to be (5/6 + 1/6) × GR geodetic precession = GR = 6.6"/yr, and if GP-B returned that value, as everybody expects, then SCC would be lost in the dust!

However after a careful analysis I have realized that the Thomas precession has to be subtracted from the geodetic and so the SCC prediction is:

(5/6 - 1/6) × GR geodetic precession = (2/3) × GR = 4.4096"/yr, and we are back in business as soon as I have time to publish the correction.

So the above prediction should now read:
1. GPB Geodetic precession
SCC = 4.4096 arcsec/yr
GR = 6.6144 arcsec/yr

GPB gravitomagnetic frame dragging precession
SCC = GR = 0.0409 arcsec/yr
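
For reference, a little sketch (mine) that reproduces the GR geodetic number from the textbook formula and then applies the 2/3 factor; the GP-B orbital radius is my assumption:

[code]
# GR geodetic (de Sitter) precession for a circular orbit:
#   Omega = (3/2) * (GM / (c^2 r)) * sqrt(GM / r^3),
# then the corrected SCC prediction above is 2/3 of that.
import math

GM = 3.986004e14      # Earth's GM, m^3/s^2
C = 2.99792458e8      # m/s
R = 7.021e6           # GP-B orbit radius (~642 km altitude), m (assumed)

omega_orb = math.sqrt(GM / R**3)                  # orbital rate, rad/s
omega_geo = 1.5 * (GM / (C**2 * R)) * omega_orb   # geodetic rate, rad/s

to_arcsec_per_yr = 3.156e7 * 206264.8             # (s/yr) * (arcsec/rad)
gr = omega_geo * to_arcsec_per_yr
print(f"GR : {gr:.2f} arcsec/yr")                 # ~6.6
print(f"SCC: {2.0 * gr / 3.0:.2f} arcsec/yr")     # ~4.4
[/code]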

We wait and see! :smile:
Garth
 
  • #9
Your model will fail for devious reasons, Garth. Look at the way they will apply corrections to GPB results. They will cancel out the very effects you are looking for. What I'm saying is you need the raw data to apply your model. Does that make any sense? You have a lot of work to do!
 
  • #10
Chronos said:
Your model will fail for devious reasons, Garth. Look at the way they will apply corrections to GPB results. They will cancel out the very effects you are looking for. What I'm saying is you need the raw data to apply your model. Does that make any sense? You have a lot of work to do!
Thank you Chronos for your comment.

The GP-B team are being very careful not to prejudge the issue, that is why the experiment's two sets of data are being kept separate.

The angular displacements of the gyros have to be related to the satellite telescope’s initial position, rather than its final position directed towards IM Pegasi.

The motion of IM Pegasi with respect to a distant quasar has been measured with extreme precision over a number of years using Very Long Baseline Interferometry (VLBI) by a team at the Harvard-Smithsonian Center for Astrophysics (CfA).

However, to ensure the integrity of the GP-B experiment, a "blind" component was added to the data analysis by insisting that the CfA withhold the proper motion data until the rest of the data analysis is complete.

I trust the team to be objective in their analysis, whatever their result may be. If my model fails because it really has been falsified by experimental observation, then so be it, at least it has had the strength to be falsifiable!

Garth
 
  • #11
May I toss you another bone, Garth? I like the way you think, so I'm always on the lookout for things like this:

You Can't Get There From Here: Hubble Relaxation in the Local Volume
http://www.arxiv.org/abs/astro-ph/0512323
 
  • #12
Chronos said:
May I toss you another bone, Garth? I like the way you think, so I'm always on the lookout things like this:
You Can't Get There From Here: Hubble Relaxation in the Local Volume
http://www.arxiv.org/abs/astro-ph/0512323
Yes, I had seen it; the bone for me to chew on is the statement:
The most straightforward explanation (though not the only possible) is that there exists a large quantity of baryonic matter in this region so far undetected, and unassociated with galaxies or groups.
SCC predicts far more baryonic matter (all DM) than the standard model; perhaps this observation is picking it up?

Garth
 
  • #13
That's pretty much why I culled that one out for your viewing pleasure, Garth! SCC has some interesting implications. I would like to think I'm not oblivious to them. Besides, I'd like to be remembered as the guy who stood up for you before you became famous! And if not, at least we drank wine together while proselytizing.
 
  • #14
Chronos said:
And if not, at least we drank wine together while proselytizing.
:approve:
I'll drink to that!

Garth
 
  • #15
Looking further into DM in the SCC scenario I wish to draw across a link from the Addressing Impossibilities in the Standard Cosmological Model thread.

The question in hand is that in SCC the overall density parameter is
[itex]\Omega_{Total} = 0.33[/itex],
of which one third is false vacuum energy,
[itex]\Omega_{False Vacuum} = 0.11[/itex].
(This false vacuum energy is detected as the Casimir force. The theory predicts that the Casimir force is limited, with a maximum determined by the gravitational potential [itex]\frac{GM}{rc^2}[/itex], the limit being detectable somewhere between the orbits of Jupiter and Saturn with present sensitivities.)

This leaves a residue of [itex]\Omega_{residue} = 0.22[/itex]

Nucleosynthesis in SCC is to be calculated in the Einstein conformal frame, in which atomic masses are constant. In this frame the universe expands strictly linearly; it is a Freely Coasting Model (FCM).

Now, the FCM model predicts a BBN baryonic density of about
[itex]\Omega_{baryon} = 0.2[/itex],
which ties in nicely with the SCC prediction, as there is the odd 1% or so of neutrino density to include, together with other possible, as yet undiscovered, particle species.

The question is in what form is this baryonic material now to be found?

One possibility is that it resides in a population of IMBHs of around (10^2 - 10^4) MSolar.

A scenario thus presents itself: out of the BB, close on the epoch of combination at the CMB Surface of Last Scattering, many large Pop III stars form in the mass range ~ (10^2 - 10^4) MSolar. (SCC primordial metallicity is not zero as in the standard model but about [-5], i.e. ~ 10^-5 Solar, so these Pop III stars can form with smaller masses and more numerously than in the standard scenario.) After a short time, < 10^6 years, these stars go hyper-nova (thus producing long GRBs) and leave behind IMBHs of about half the progenitor's mass.

One question with this scenario was: what about the ejected material that was not drawn into the IMBH?

Although PopIII stellar evolution is very hazy, virial arguments might suggest that half the mass is drawn into the IMBH and about half ejected. So, unless this ejected material went on to form further giant stars and further BHs there should be a lot of it left behind.

SpaceTiger's link in the other thread to http://lanl.arxiv.org/abs/astro-ph/0501126 provides an answer: it is WHIM!

[itex]\Omega_b[/itex](WHIM) (≥ 7 × 10^14) = (2.4 +1.9/−1.1) × 10^(−[O/H]−1) %, consistent with both model predictions and the actual number of missing baryons.
[O/H] is needed; now in Table 1 they state at:
z = 0.011, [O/H] > -1.47, and at
z = 0.027, [O/H] > -1.32,
so the upper limit is:
[itex]\Omega_b[/itex](WHIM) > 4.3 × 10^0.47 % = 12.6%
and the lower limit:
[itex]\Omega_b[/itex](WHIM) > 1.3 × 10^0.32 % = 2.7%?

This is indeed consistent with the standard model value of about [itex]\Omega_b[/itex] = 0.04, but also with the much higher [itex]\Omega_{WHIM}[/itex] allowed by the FCM BBN.
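
The arithmetic behind those two limits in one place (my sketch of the quoted scaling; the coefficients and [O/H] limits are taken from the paper's Table 1 as quoted above):

[code]
# Omega_b(WHIM) in percent, using the quoted scaling coeff * 10**(-[O/H] - 1).
def omega_whim_percent(coeff, o_h):
    return coeff * 10**(-o_h - 1.0)

print(omega_whim_percent(2.4 + 1.9, -1.47))   # ~12.7 % (upper)
print(omega_whim_percent(2.4 - 1.1, -1.32))   # ~2.7 %  (lower)
[/code]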

The residue density in SCC now appears to be made up of:
[itex]\Omega[/itex](observed galactic matter) = 0.003
[itex]\Omega[/itex](WHIM) ~ 0.1
[itex]\Omega[/itex](IMBH) ~ 0.1
[itex]\Omega[/itex](neutrinos, other matter, etc.) = 0.017
Total: [itex]\Omega_{residue} = 0.22[/itex]

Of course the numbers can be played about with a bit, the ratio of WHIM/IMBH being most plastic!
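
(A trivial bookkeeping check of that budget, for what it's worth:

[code]
# Sum of the SCC residue components quoted above.
budget = {"galactic matter": 0.003, "WHIM": 0.1, "IMBH": 0.1, "neutrinos etc.": 0.017}
print(round(sum(budget.values()), 3))   # 0.22 = Omega_residue
[/code])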

Garth
 
  • #16
Garth said:
[itex]\Omega[/itex](observed galactic matter) = 0.003
[itex]\Omega[/itex](WHIM) ~ 0.1
[itex]\Omega[/itex](IMBH) ~ 0.1
[itex]\Omega[/itex](neutrinos, other matter, etc.) = 0.017
Total: [itex]\Omega_{residue} = 0.22[/itex]

So your model does require a sort of dark matter, does it not? You were forced to include an ad hoc term of [itex]\Omega[/itex](IMBH) ~ 0.1 in order to put your number into concordance with your model's prediction. Such a population of black holes, although possible, is certainly not expected in a naive cosmological model. Why is this less ad hoc than weakly-interacting particles?

In addition, what you call the "false vacuum energy":

[itex]\Omega[/itex](False Vacuum) ~ 0.11

is really a form of dark energy, is it not? And does it not fall victim to the same fine-tuning problem as the cosmological constant of the standard cosmological model?
 
  • #17
ST Thank you for your important questions, let me take them one by one.
SpaceTiger said:
So your model does require a sort of dark matter, does it not? You were forced to include an ad hoc term of [itex]\Omega[/itex](IMBH) ~ 0.1 in order to put your number into concordance with your model's prediction. Such a population of black holes, although possible, is certainly not expected in a naive cosmological model. Why is this less ad hoc than weakly-interacting particles?
Yes, I have always said the model requires dark matter, the difference though between SCC and GR is that this is baryonic dark matter.

All the gravitational theory actually predicts is that
[itex]\Omega[/itex](Matter) = 0.22; as the observed matter is much smaller than this, the question SCC leaves unanswered is in what form this dark matter might be. The answer to this secondary question comes in several parts.

First is the Indian team's work on the FCM model, which quite independently found [itex]\Omega[/itex](baryon) ~ 0.2; thus the DM is baryonic in this model, and there is also a little leeway for the neutrino density to be squeezed in.

The second part was your acknowledgment that IMBHs would fit the bill, except that in the mainstream model the baryon budget is constrained by BBN to [itex]\Omega[/itex](baryon) = 0.04. However, in SCC, without this constraint, I was at a loss to explain why the IMBH formation efficiency should be almost 100%.

The third part, your last link for me, answered that latter question: the IMBH formation rate need not be 100%; 50% IMBH and the remainder as WHIM is consistent with observation. Also a 50/50 IMBH/ejected-gas model makes good sense in a 'hand waving' sort of way. (My arms are going like windmills at present!)

Thank you!
In addition, what you call the "false vacuum energy":
[itex]\Omega[/itex](False Vacuum) ~ 0.11
is really a form of dark energy, is it not? And does it not fall victim to the same fine-tuning problem as the cosmological constant of the standard cosmological model?
Astrophysics is the science of understanding what goes on 'up there' (astro-) by understanding what goes on 'down here', in the lab (-physics). Cosmology applies this understanding to the largest possible observable scales. Up to the 1970s it did a good job, the cosmological theory, GR, having been well established in laboratory and solar system experiments. However, that relationship between cosmological theory and laboratory science began to change with the introduction of Inflation, then DM, then DE, which were introduced a posteriori to make the model fit, without laboratory confirmation (yet?).

The false vacuum energy density is predicted by SCC in the local vicinity and is discovered in the laboratory as the Casimir force. The spherically symmetric solution is over-determined in the theory, yielding two separate solutions, but in flat space both solutions converge. However, they slightly diverge in the presence of space-time curvature and require a precise small false vacuum energy for consistency. This prediction may be tested if an experiment measuring the Casimir force is launched into the trans-Saturnian solar gravitational field. This false vacuum energy requirement in the cosmological solution yields [itex]\Omega[/itex](false vacuum) = 0.11. This value is predicted by the cosmological equations (see http://www.kluweronline.com/oasis.htm/5092775) and is not fine-tuned to fit.

Please continue to provide constructive criticism, I value your comments a lot.

Garth
 
  • #18
Garth said:
Yes, I have always said the model requires dark matter, the difference though between SCC and GR is that this is baryonic dark matter.

...

The third part, your last link for me, answered that latter question; the IMBH formation rate need not be 100%; 50% IMBH and the remainder as WHIM was consistent with observation. Also a 50/50 IMBH/ejected-gas model makes good sense in a 'hand waving' sort of way. (My arms are going like windmills at the present!)

Fair enough, but I'm not sure you've answered my question. Why is this less ad hoc than non-baryonic dark matter? It's much easier for mainstream physics to explain a weakly interacting and abundant particle species than a population of intermediate mass black holes that makes up 10% of the closure density. How do you envision them being formed?
The false vacuum energy density is predicted by SCC in the local vicinity and is discovered in the laboratory as the Casimir force.

Two things. First, I don't understand how you're distinguishing this from the traditional quantum explanation for the cosmological constant. In fact, as far as I can tell, your [tex]\Omega_{False Vacuum}[/tex] is equivalent to [tex]\Omega_{\Lambda}[/tex] from the quantum point of view. I understand that your theory of gravity is different, but you seem to be invoking the same source for the "dark energy" as in the most popular mainstream models.

Second, measurements of the Casimir Force tell us about the existence of the vacuum energy, but they tell us nothing of its magnitude. A measurement of a "force" is basically a measurement of dE/dx, not of E_0. It's the latter that you need to constrain [tex]\Omega_{False Vacuum}[/tex].
This false vacuum energy requirement in the cosmological solution yields [itex]\Omega[/itex](false vacuum) = 0.11. This value is predicted by the cosmological equations (see http://www.kluweronline.com/oasis.htm/5092775) and is not fine-tuned to fit.

The fine-tuning problem comes from the quantum end of things, not the cosmological end. You may be thinking of the less severe "cosmic coincidence problem", which asks why the cosmological constant would suddenly be turning on at this moment in cosmic history. For more information on the fine-tuning and cosmic coincidence problem, check out this paper:

http://lanl.arxiv.org/abs/astro-ph/0202076
 
  • #19
SpaceTiger said:
Fair enough, but I'm not sure you've answered my question. Why is this less ad hoc than non-baryonic dark matter?
Only in that it does not require the invocation of an unknown/undetected species of fundamental particle.
It's much easier for mainstream physics to explain a weakly interacting and abundant particle species than a population of intermediate mass black holes that makes up 10% of the closure density. How do you envision them being formed?
From a dense ensemble of moderate mass (10^2 - 10^3 MSolar) Pop III stars, which also give rise to ionisation and (enhanced) metallicity in the early universe.
The false vacuum energy density is predicted by SCC in the local vicinity and is discovered in the laboratory as the Casimir force.
Two things. First, I don't understand how you're distinguishing this from the traditional quantum explanation for the cosmological constant. In fact, as far as I can tell, your [tex]\Omega_{False Vacuum}[/tex] is equivalent to [tex]\Omega_{\Lambda}[/tex] from the quantum point of view. I understand that your theory of gravity is different, but you seem to be invoking the same source for the "dark energy" as in the most popular mainstream models.
There is a subtle distinction between the cosmological constant on the left-hand side of the GR field equation, which describes space-time curvature, and the false vacuum on the right-hand side of the FE, which is entered as its density and pressure, a source of gravitation. One is the source (RHS), the other is the effect (LHS). This distinction is often blurred and confused because the equations of state of the cosmological constant and false vacuum energy are almost identical, differing by the amount [itex]g_{\mu \nu}[/itex] differs from [itex]\eta_{\mu \nu}[/itex] (although the equations of state are exactly identical at one quantum loop of vacuum fluctuations).
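
In equations (my gloss, using standard GR bookkeeping in units c = 1), the two placements are

[tex]G_{\mu\nu} + \Lambda g_{\mu\nu} = 8\pi G\,T_{\mu\nu} \quad\Longleftrightarrow\quad G_{\mu\nu} = 8\pi G\left(T_{\mu\nu} + T_{\mu\nu}^{vac}\right), \qquad T_{\mu\nu}^{vac} = -\frac{\Lambda}{8\pi G}\,g_{\mu\nu},[/tex]

i.e. a false vacuum with equation of state [itex]p = -\rho[/itex] on the right-hand side mimics a cosmological constant on the left.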

The local and tested Schwarzschild solution of GR has nothing to say about vacuum energy; in GR the vacuum is simply that, an empty vacuum of zero density and pressure. However, the modern cosmological solution has to include a 'false vacuum' energy of some sort to 'balance the books', and of course it also has the advantage of including quantum effects.

In SCC the local spherically symmetric solution requires the existence of a moderate false vacuum energy density for consistency. This is detectable, as the Casimir force, and makes a falsifiable prediction in the far solar system Casimir experiment proposed. It was not therefore a surprise when it then also cropped up in the cosmological solution. See my definition of astrophysical cosmology in my earlier post above.
Second, measurements of the Casimir Force tell us about the existence of the vacuum energy, but they tell us nothing of its magnitude. A measurement of a "force" is basically a measurement of dE/dx, not of E_0. It's the latter that you need to constrain [tex]\Omega_{False Vacuum}[/tex].
Agreed, but SCC suggests that there is a natural renormalised 'cut-off' E_max determined by, and therefore limited by, the field equations of the gravitational theory. In the cosmological solution this resolves the 'Lambda problem'.

It is the cut-off, detected as a rounding off at the maximum Casimir force as two plates are brought arbitrarily close together, which measures the vacuum energy density.

The false vacuum density is [itex]\rho = -p_{max}[/itex].
The pressure, Casimir force per plate area, is given by
[tex]p_{max} = \frac{F}{A} = -\left(\frac{\pi h c}{480}\right) z^{-4},[/tex]
where z is the plate separation at maximum Casimir force.

SCC suggests this density, and hence the maximum Casimir pressure, is limited; the standard theory says it is (almost) infinite, a difference that can be resolved experimentally. (As I have said, with present experimental sensitivity the round-off should be detected in the solar field between the orbits of Jupiter and Saturn.)
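
To put numbers on the z^-4 law quoted above (a sketch of the standard formula only; where SCC's saturation actually sets in depends on the field equations and is not computed here):

[code]
# Ideal-plate Casimir pressure |p| = pi*h*c / (480 * z^4),
# equivalently pi^2 * hbar * c / (240 * z^4).
import math

H = 6.62607015e-34    # Planck constant, J s
C = 2.99792458e8      # speed of light, m/s

def casimir_pressure(z):
    """Magnitude of the Casimir pressure (Pa) for plate separation z (m)."""
    return math.pi * H * C / (480.0 * z**4)

for z_um in (1.0, 0.5, 0.1):
    print(f"z = {z_um} um: |p| = {casimir_pressure(z_um * 1e-6):.2e} Pa")
[/code]

At 1 micron separation this gives ~1.3 × 10^-3 Pa; SCC's claim is that as z shrinks this stops growing at some p_max set by [itex]\rho = -p_{max}[/itex].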
The fine-tuning problem comes from the quantum end of things, not the cosmological end. You may be thinking of the less severe "cosmic coincidence problem", which asks why the cosmological constant would suddenly be turning on at this moment in cosmic history. For more information on the fine-tuning and cosmic coincidence problem, check out this paper:

http://lanl.arxiv.org/abs/astro-ph/0202076
Yes, that is a fine paper, thank you. However, my comment on 'fine tuning' was simply responding to your question: "does it (SCC) not fall victim to the same fine-tuning problem as the cosmological constant of the standard cosmological model?" As I said, [tex]\Omega_{False Vacuum} = 0.11[/tex] is determined by the field equations and is not a free parameter adjusted to fit observations.

Garth
 
  • #20
Garth said:
Only in that it does not require the invocation of an unknown/undetected species of fundamental particle.

It appears to require the existence of a much less plausible population of objects!

From a dense ensemble of moderate mass (10^2 - 10^3 MSolar) Pop III stars, which also give rise to ionisation and (enhanced) metallicity in the early universe.

So your model requires that for every solar mass of material that went into low-mass stars (<~ 1 solar mass) in the early universe, roughly one hundred solar masses went into IMBHs? This requires both an extremely top-heavy mass function and an extremely large star formation efficiency. Do you have any references that even suggest that such a thing might be possible? To my knowledge, it's difficult to even form structure in a purely baryonic universe. Dark matter halos act as seeds for the formation of galaxies and clusters.

In SCC the local spherically symmetric solution requires the existence of a moderate false vacuum energy density for consistency.

So, if I'm understanding you correctly, the difference is not that your model doesn't include dark energy, it's that your model requires it to survive. I suppose this can be viewed as a benefit, but it's a bit deceptive to say that you don't need dark energy.

Agreed, but SCC suggests that there is a natural renormalised 'cut-off' Emax determined, and therefore limited by, the field equations of the gravitational theory. In the cosmological solution this resolves the 'Lambda problem'.

So, to be clear, you're not just modifying gravity, but predicting a low high-energy cutoff for QED (and other QFTs) as well? And the value of this cutoff depends upon the local gravitational potential? The paper I linked addresses the issue of introducing a UV cutoff (E_max) to existing theory:

Sahni 2002 said:
One way to avoid this is to assume that the Planck scale provides a natural ultraviolet cutoff to all field theoretic processes; this results in <T_00>_vac ≃ c^5/(G^2 hbar) ∼ 10^76 GeV^4, which is 123 orders of magnitude larger than the currently observed value ρ ≃ 10^-47 GeV^4. A cutoff at the much lower QCD scale doesn't fare much better, since it generates a cosmological constant E^4_QCD ∼ 10^-3 GeV^4, forty orders of magnitude larger than observed. Clearly the answer to the cosmological constant issue must lie elsewhere.


In other words, the simple cutoff you propose appears to be unphysical from the particle physics point of view. Could you elaborate on how SCC changes this fact? Is it more than a theory of gravity?
 
  • #21
Thank you for your comments. It is important to distinguish between what the theory predicts and the astrophysical consequences of that prediction. The main problem, as I described above, is explaining why the baryonic DM is 'dark' today.
SpaceTiger said:
It appears to require the existence of a much less plausible population of objects!
So your model requires that for every solar mass of material that went into low-mass stars (<~ 1 solar mass) in the early universe, roughly one hundred solar masses went into IMBHs? This requires both an extremely top-heavy mass function and an extremely large star formation efficiency. Do you have any references that even suggest that such a thing might be possible? To my knowledge, it's difficult to even form structure in a purely baryonic universe. Dark matter halos act as seeds for the formation of galaxies and clusters.
Yes, in the early low-metallicity universe the IMF is expected to be top-heavy, as metallicity is important in radiating away energy from the collapsing star. The paper Constraints on the IMF of the first stars in the standard LCDM model suggests that:
Indeed, the physical conditions in primordial star-forming regions appear to systematically favor the formation of very massive stars. In particular, (i) the fragmentation scale of metal-free clouds is typically 10^3 M⊙ (Abel, Bryan & Norman 2002; Bromm, Coppi & Larson 2002); (ii) because of the absence of dust grains the radiative feedback from the forming star is not strong enough to halt further gas accretion (Omukai & Palla 2003); (iii) since the accretion rate is as large as 10^-3 - 10^-2 M⊙ yr^-1, the star grows up to >~ 100 M⊙ within its lifetime (Stahler, Palla & Salpeter 1986; Omukai & Nishi 1998; Ripamonti et al. 2002)...
By modeling the structure of the accretion flow and the evolution of the protostar, Tan & McKee (2004) have recently shown that radiative feedback becomes dynamically significant at protostellar masses ≈ 30 M⊙, and is likely to constrain the mass of the first stars to the range 100 - 300 M⊙.
Although star formation would take longer (the standard Jeans timescale) than in the standard model, there is more time available in the SCC early universe because of the strictly linear expansion.
So, if I'm understanding you correctly, the difference is not that your model doesn't include dark energy, it's that your model requires it to survive. I suppose this can be viewed as a benefit, but it's a bit deceptive to say that you don't need dark energy.
I actually say it does not require 'unknown' DE, but I take your point, since I often leave off the 'unknown' as it takes too long to describe the vacuum energy's derivation.
So, to be clear, you're not just modifying gravity, but predicting a low high-energy cutoff for QED (and other QFTs) as well? And the value of this cutoff depends upon the local gravitational potential?
Yes
The paper I linked addresses the issue of introducing a UV cutoff (Emax) to existing theory:
In other words, the simple cutoff you propose appears to be unphysical from the particle physics point of view. Could you elaborate on how SCC changes this fact? Is it more than a theory of gravity?
It is just a theory of gravity, but one in which the principle of the local conservation of energy in the Jordan conformal frame requires the vacuum in a gravitational field to have a small negative density. There are two points to make: the nature of this vacuum density is concordant with observations of the Casimir force, and this is testable in the experiment proposed above. I agree that the simple cut-off "appears to be unphysical from the particle physics point of view", but then would not the same criticism equally apply to the vacuum in GR?

Garth
 
  • #22
Garth said:
Thank you for your comments. It is important to distinguish between what the theory predicts and the astrophysical consequences of that prediction. The main problem, as I described above, is explaining why the baryonic DM is 'dark' today.

Yes, I think we're on the same page here. I wouldn't expect the theory of structure formation in your hypothetical universe to have been worked out in detail, but the problem is that my intuition tells me it's not even close to being possible. Perhaps we can explore this further...


Yes, in the early low-metallicity universe the IMF is expected to be top-heavy, as metallicity is important in radiating away energy from the collapsing star. The paper Constraints on the IMF of the first stars in the standard LCDM model suggests that... Although star formation would take longer (the standard Jeans timescale) than in the standard model, there is more time available in the SCC early universe because of the strictly linear expansion.

I agree and understand that Pop III stellar populations would be expected to have a top-heavy IMF, but I'm asking for quantitative support for the extreme requirements of your model. Not only do you have to put an utterly negligible amount of the mass into low-mass stars, but you need to put about half of your entire baryon budget into stars at a very early period in the evolution of the universe.

For the former requirement, you would need to find a calculation of a theoretical IMF in which the integrated total mass of these heavy stars exceeded the integrated total mass of stars that we can observe today (i.e. that are long-lived) by a factor of at least 100. Really, the majority of the stars we see in the observable universe are not from such an ancient population, so you probably need more like a factor of ~1000. It might be worth computing this more precisely.

For the latter requirement, you need to show that a reasonable spectrum of density perturbations will naturally evolve such that half of your baryon budget is collapsed into these heavy stars at high-redshift. For this, you will presumably need to do a calculation similar to that of Press & Schechter (see the "Classic Papers" thread for the link), taking into account this extra time that is provided by your linear expansion.

I object because it doesn't sound to me like it would work. I'll gladly concede the point if you can give me quantitative support.


There are two points to make: the nature of this vacuum density is concordant with observations of the Casimir force and this is testable in the experiment proposed above.

One could also then say that some forms of dark energy in the standard model are also concordant with these observations. This would not apply to the more exotic types of dark energy (like quintessence), but those are not the only ones considered in the mainstream model. The zero-point energy has not been ruled out as a source for the apparently accelerating expansion.


I agree that the simple cut-off "appears to be unphysical from the particle physics point of view", but then would not the same criticism equally apply to vacuum in GR?

Absolutely, that's why there's a fine-tuning problem!
 
  • #23
The headache I have with SCC is that primordial baryogenesis had to persist much longer than predicted by most other models. That does not seem to fit observational evidence. Specifically, it does not explain how the universe cooled so quickly.
 
  • #24
SpaceTiger said:
I agree and understand that Pop III stellar populations would be expected to have a top-heavy IMF, but I'm asking for quantitative support for the extreme requirements of your model. Not only do you have to put an utterly negligible amount of the mass into low-mass stars, but you need to put about half of your entire baryon budget into stars at a very early period in the evolution of the universe.
For the former requirement, you would need to find a calculation of a theoretical IMF in which the integrated total mass of these heavy stars exceeded the integrated total mass of stars that we can observe today (i.e. that are long-lived) by a factor of at least 100. Really, the majority of the stars we see in the observable universe are not from such an ancient population, so you probably need more like a factor of ~1000. It might be worth computing this more precisely.
For the latter requirement, you need to show that a reasonable spectrum of density perturbations will naturally evolve such that half of your baryon budget is collapsed into these heavy stars at high-redshift. For this, you will presumably need to do a calculation similar to that of Press & Schechter (see the "Classic Papers" thread for the link), taking into account this extra time that is provided by your linear expansion.
I object because it doesn't sound to me like it would work. I'll gladly concede the point if you can give me quantitative support.
Thank you, it is good to see more precisely where the problems lie. The calculations that are needed may take a little time; Rome, and the standard model, wasn't built in a day!
One could also then say that some forms of dark energy in the standard model are also concordant with these observations. This would not apply to the more exotic types of dark energy (like quintessence), but those are not the only ones considered in the mainstream model. The zero-point energy has not been ruled out as a source for the apparently accelerating expansion.
Except for the slight matter of a factor of 10^140 or so in the Lambda problem? As you say:
Absolutely, that's why there's a fine-tuning problem!

Thank you for your comments.

Garth
 
  • #25
Garth said:
Thank you it is good to see more precisely where the problems lie - the calculations that are needed may take a little time, Rome, and the standard model, wasn't built in a day!

Quite alright, take your time!


Except for the slight matter of a factor of 10^140 or so in the Lambda problem?

Yes, the hope remains that a field theory (not just of gravity) will come along in which this balance is quite natural. Meanwhile, others are suggesting that perhaps it's simply a matter of the anthropic principle -- a universe that was lambda-dominated at an earlier time would not have been habitable to life. I'm not the person to ask about particle physics, however; I work mainly on the cosmological side. :smile:
 
  • #26
Chronos said:
The headache I have with SCC is that primordial baryogenesis had to persist much longer than predicted by most other models. That does not seem to fit observational evidence. Specifically, it does not explain how the universe cooled so quickly.
BBN continued far longer in the FCM; the universe did not cool as quickly, that is the point. But how do you measure the rate at which the universe cooled except from the BBN relative abundances?

The problem the FCM/SCC model does have is with deuterium, which instead has to be made by a spallation process, perhaps in shocks associated with the formation and demise of Pop III stars.

From Concordant "Freely Coasting Cosmology":
Energy conservation, in a period where the baryon entropy ratio does not change, enables the distribution of photons to be described by an effective temperature T that scales as a(t)T = constant. With the age of the universe estimated from the Hubble parameter being ~ 1.5 × 10^10 years, and T_0 ~ 2.7 K, one concludes that the age of the universe at T ~ 10^10 K would be some four years [rather than a few seconds as in standard cosmology]. The universe would take some 10^3 years to cool to 10^7 K. With such time periods being large in comparison to the free neutron lifetime, one would hardly expect any neutrons to survive at temperatures relevant for nucleosynthesis.

However, with such a low rate of expansion, weak interactions remain in equilibrium for temperatures as low as ~ 10^8 K. The neutron-proton ratio keeps falling as n/p ~ exp[−15/T_9]. Here T_9 is the temperature in units of 10^9 K and the factor of 15 comes from the n-p mass difference in these units. There would again hardly be any neutrons left if nucleosynthesis were to commence at (say) T_9 ~ 1. However, as weak interactions are still in equilibrium, once nucleosynthesis commences, inverse beta decay would replenish neutrons by converting protons into neutrons and pumping them into the nucleosynthesis channel. With beta decay in equilibrium, the baryon entropy ratio determines a low enough nucleosynthesis rate that can remove neutrons out of the equilibrium buffer at a rate smaller than the relaxation time of the buffer. This ensures that the neutron value remains unchanged as heavier nuclei build up. It turns out that for a baryon entropy ratio [itex]\eta[/itex] ~ 5 × 10^-9, there would be just enough neutrons produced, after nucleosynthesis commences, to give ~ 23.9% Helium and a metallicity some 10^8 times that produced in the early universe in the standard scenario. This metallicity is of the same order of magnitude as seen in the lowest metallicity objects.

The only problem that one has to contend with is the significantly low yield of deuterium in such a cosmology. Though deuterium can be produced by spallation processes later in the history of the universe, it is difficult to produce the right amount without a simultaneous over-production of Lithium [19]. However, as pointed out in [1], the amount of Helium produced is quite sensitive to [itex]\eta[/itex] in such models. In an inhomogeneous universe, therefore, one can have a large primordial variation in the helium to hydrogen ratio. Deuterium can be produced by a spallation process much later in the history of the universe. If one considers spallation of a helium-deficient cloud onto a helium-rich cloud, it is easy to produce deuterium, as demonstrated by Epstein [19], without over-production of Lithium.

Interestingly, the baryon entropy ratio required for the right amount of helium corresponds to [itex]\Omega_b[/itex] ~ 0.2.
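
A one-minute check of the cooling times in that quote (my sketch; it just uses a ~ t and aT = constant, with the quoted t_0 = 1.5 × 10^10 yr and T_0 = 2.7 K):

[code]
# In a linearly expanding universe a ~ t, so aT = const gives t(T) = t0*T0/T.
T0 = 2.7        # K, radiation temperature today
t0 = 1.5e10     # yr, quoted age of the universe

def age_at(T):
    """Age in years when the radiation temperature was T kelvin."""
    return t0 * T0 / T

print(f"T = 1e10 K: {age_at(1e10):.1f} yr")   # ~4 yr, as quoted
print(f"T = 1e7  K: {age_at(1e7):.0f} yr")    # ~4e3 yr, i.e. 'some 10^3 years'
[/code]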

Garth
 
  • #27
Why n0bdy uses information quants?;
 
  • #28
murdoc said:
Why n0bdy uses information quants?;
:confused:

And welcome to these Forums murdoc!
Would you like to restate your question, or was it just an observation?


Garth
 
  • #29
There is a further reported conundrum with the mainstream theory that may be resolved in Self Creation Cosmology (http://en.wikipedia.org/wiki/Self_creation_cosmology).

As recently posted by Chronos in the 'quasar anomalies' thread, M.R.S. Hawkins of the Royal Observatory Edinburgh published a paper, Time Dilation and Quasar Variability, reporting a problem with the concept of cosmological expansion. It appears that the variability of distant quasars does not show the cosmological time dilation that the standard model requires.

The mystery deepens because cosmological time dilation is observed in distant supernovae and slow Gamma Ray Burster light curves.

There may be other explanations for this anomaly, however, SCC provides a ready solution to this problem.

In the SCC Jordan frame the universe is static and red shift is due to a variable mass effect.

In SCC this mass varies with the scalar field (and inversely with G so GM = const.), but only if it is non-degenerate mass.

Fully relativistic energy and degenerate matter, with equation of state [itex]p = +\frac13 \rho[/itex], are decoupled from the scalar field and would not vary. Hence, although matter accreting onto objects composed of such matter, for example neutron stars and black holes, would show cosmological redshift, the central engine itself is degenerate, so there would be no cosmological redshift effect in the variability of that radiation.

Therefore supernovae and GRBs, whose central engines are exploding ordinary stars, would be expected by SCC to be affected by the variation in mass and so to exhibit cosmological time dilation, whereas quasars, which are assumed to have massive black holes as their central engines, would not be expected by the theory to exhibit time dilation in their variability.

This is what seems to be observed.

Garth
 
Last edited by a moderator:
  • #30
Garth said:
Fully relativistic energy and degenerate matter, with equation of state [itex]p = +\frac13 \rho[/itex], are decoupled from the scalar field and would not vary. Hence, although matter accreting onto objects composed of such matter, for example neutron stars and black holes, would show cosmological redshift, the central engine itself is degenerate, so there would be no cosmological redshift effect in the variability of that radiation.

I'm a bit confused by this argument, Garth. You admit that there should be cosmological redshift (and, presumably, time dilation) in the radiation emitted from the accretion disk, but then claim that the radiation's variability wouldn't show this effect. Since the variable radiation arises from the accreting matter, shouldn't it be dilated?
 
  • #31
SpaceTiger said:
I'm a bit confused by this argument, Garth. You admit that there should be cosmological redshift (and, presumably, time dilation) in the radiation emitted from the accretion disk, but then claim that the radiation's variability wouldn't show this effect. Since the variable radiation arises from the accreting matter, shouldn't it be dilated?
First, this is very new to me and I am only beginning to work out the implications of the absence of time dilation in quasar variability for the SCC scenario. I may not have it right yet!

The basic premise is that it appears that observations confirm time dilation in the light profiles of distant supernovae, and, less certainly, of long GRBs, yet it is not observed in quasar variability.

One difference between these two classes is that the engines of supernovae and GRBs(?) are exploding non-degenerate (but massive) stars, whereas the engine of a quasar is degenerate matter that has collapsed into a black hole.

Therefore there may be an explanation for this observation in SCC because in that theory the scalar field is coupled to non-degenerate matter and decoupled from degenerate matter.

How would this work?

In SCC there are two conformal frames. In the Einstein frame the cosmos evolves very much as in the mainstream model, except that the expansion is strictly linear with time, the universe is conformally flat, the DM is all baryonic, and the DE is a predetermined and measurable amount of false vacuum energy.

In the Jordan frame the universe is static and cylindrical (closed); particle masses increase exponentially with time because of their interaction with the scalar field, rulers 'shrink', and cosmological redshift is a variable mass effect.

There are two processes involved in observing quasar variability, which may be understood in the Jordan frame.

The first is the redshift observed in emission lines from the accretion disk: the atoms in the past were less massive and therefore emitted at a lower frequency than at present, which is observed as a cosmological redshift.
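
This can be made quantitative using the exponential mass increase described above (a minimal sketch, assuming emitted frequencies scale directly with particle mass; the value of H is illustrative):

[code]
import math

H = 1.0 / 1.5e10   # Hubble parameter in 1/years (illustrative value)

def jordan_frame_redshift(lookback_years):
    """If emitted frequencies scale with particle mass m = m0*exp(H*t),
    then 1 + z = m(now) / m(then) = exp(H * lookback)."""
    return math.exp(H * lookback_years) - 1.0

for t in (1e9, 5e9, 1e10):
    print(f"lookback {t:.0e} yr:  z ~ {jordan_frame_redshift(t):.2f}")
[/code]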

Secondly, the time scale of the variability depends on the size of the accretion disk itself. This diameter is determined by the gravitational field, which is itself set by the mass of the central black hole.

Whereas the masses of individual atoms increase over cosmological time, the mass of the black hole does not, because it does not interact with the scalar field. Its mass increases only through accretion; it does not increase cosmologically.

Therefore the time scale of the variability of the quasar's emission should not be redshift dependent.
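
To order of magnitude, the shortest variability timescale of the central engine is the light-crossing time of the gravitational radius, which depends only on the product GM (a minimal sketch; the 10^8 solar-mass figure is an assumed, typical quasar black hole mass):

[code]
G     = 6.674e-11   # m^3 kg^-1 s^-2
c     = 2.998e8     # m/s
M_sun = 1.989e30    # kg

def variability_timescale(M_bh):
    """Light-crossing time of the gravitational radius GM/c^2, in seconds.
    If GM for the hole carries no cosmological factor, neither does
    this timescale."""
    return G * M_bh / c**3

print(variability_timescale(1e8 * M_sun))  # ~500 s for a 10^8 M_sun hole
[/code]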

The question of whether the light curve of a supernova should show time dilation in a variable mass cosmology was discussed by Narlikar & Arp in the paper: http://www.journals.uchicago.edu/ApJ/journal/issues/ApJL/v482n2/5714/5714.pdf

I would appreciate comment and constructive criticism!

Garth
 
Last edited by a moderator:
  • #32
Garth said:
http://www.journals.uchicago.edu/ApJ/journal/issues/ApJL/v482n2/5714/5714.pdf

Well, I'm far from an expert on variable mass cosmologies, but it appears that their arguments are dependent on the variable mass of the emitting particles, not of the source of gravity. That is, they say that the timescales of atomic emission would be scaled up by their lower masses. In your model, I would think the same argument should apply to the matter accreting onto the black hole, despite the constancy of the mass of the black hole itself.

EDIT: No, I take it back. The emission should be affected by redshift, but the variability follows from purely geometric arguments. If the mass of the black hole is the same, then the geometric size should be the same. You're instead stuck with the problem of how the black hole manages to accrete so much with such a strongly diminished Eddington luminosity (which goes as the particle masses cubed).
 
Last edited by a moderator:
  • #33
SpaceTiger said:
EDIT: No, I take it back. The emission should be affected by redshift, but the variability follows from purely geometric arguments. If the mass of the black hole is the same, then the geometric size should be the same. You're instead stuck with the problem of how the black hole manages to accrete so much with such a strongly diminished Eddington luminosity (which goes as the particle masses cubed).
Then we agree, although I am not so sure I understand how a BH behaves in the Jordan frame of my theory! One key factor is that G also varies cosmologically:
[itex]m = m_0 e^{Ht}[/itex] for normal matter and
[itex]G = G_0 e^{-Ht}[/itex].

The question is whether the cosmological decrease in G (increase in [itex]\phi[/itex]) applies to the mass, and hence affects the gravitational field, inside the BH event horizon, and as that field smoothly matches the metric outside the event horizon, affects the orbital dynamics of the accretion disk.

In the past this enhanced value of G would alleviate the deficient Eddington luminosity somewhat. However, I don't quite understand how a diminished Eddington luminosity is a problem for matter accretion. Surely with less stuff being ejected, it would be easier to accrete mass? Or are you thinking about the last stages of a massive star evolving into a BH?

Garth
 
Last edited:
  • #34
Garth said:
In the past this enhanced value of G would alleviate the deficient Eddington luminosity somewhat. However, I don't quite understand how a diminished Eddington luminosity is a problem for matter accretion. Surely with less stuff being ejected, it would be easier to accrete mass? Or are you thinking about the last stages of a massive star evolving into a BH?

A diminished Eddington luminosity causes two problems. First, it makes it hard to explain the brightest quasars. If the Eddington luminosity were that much smaller at high redshift, we should see strong luminosity evolution in the quasars. To my knowledge, there are no strong trends in this direction until very high redshift (where the black holes are presumably much smaller on average). The other problem is that it exacerbates the pre-existing difficulty of explaining how SMBHs reached such large masses by the present epoch and how we see such massive and bright quasars at high redshift. The Eddington limit, though not strict, is certainly valid to within a factor of order unity. It might be worth running some numbers to figure out the limiting luminosities in your model at various redshifts and, along with that, the inferred minimum masses of the black holes hosting high-z quasars. The evolving G does help your case a bit (it cancels out one factor of exp(Ht)), but your Eddington luminosity still diminishes steeply with redshift.
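
In outline, the scaling works out as follows (a minimal sketch under the assumptions stated in this thread: [itex]L_{Edd} \propto G M m_p m_e^2[/itex] since [itex]\sigma_T \propto 1/m_e^2[/itex], particle masses [itex]\propto e^{Ht}[/itex], [itex]G \propto e^{-Ht}[/itex], and [itex]1+z = e^{H \Delta t}[/itex] in the Jordan frame):

[code]
# With all particle masses scaling as exp(Ht) and G as exp(-Ht), the
# net cosmological factor in L_Edd ~ G*M*m_p*m_e^2 is exp(2Ht), i.e.
# for a fixed hole mass M the limit at redshift z is down by (1+z)^-2.

def eddington_suppression(z):
    """Factor by which L_Edd at redshift z sits below its present value."""
    return (1.0 + z) ** -2

for z in (0.5, 2.0, 6.0):
    print(f"z = {z}:  L_Edd / L_Edd(today) ~ {eddington_suppression(z):.3f}")
[/code]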
 
  • #35
Thank you SpaceTiger, I was being a bit thick when you first mentioned the Eddington luminosity!

I will obviously have to start thinking about a theory of Super-Eddington accretion...

Garth
 