What Could Cause a Big Crunch in the Expansion of the Universe?

In summary, according to Hawking, the universe is expanding; if something were to give it an extra push (e.g. dark energy), the expansion would become faster and faster. The Big Crunch is the scenario that applies if, instead, the force of gravity is strong enough to slow the expansion until it stops and reverses.
  • #106
russ_watters said:
A non-accelerating expansion most certainly does have a horizon, as does a non-expanding universe! All that is required to have a horizon for the "observable" universe is for the universe to be larger in light years than it is old in years. I.e., a universe that is 1 year old and not expanding will have an observable size of 1 ly; a universe 14 billion years old will have an observable size of 14 billion light years.

For a constantly expanding universe that started at a big bang, that requires the expansion to always exceed the speed of light.

There are several horizons you can speak of. The one that was being discussed in [post=2291438]msg #16[/post] is often called the "event horizon". It partitions spacetime into (A) events we have "seen" or that we can "see" if we only wait long enough, and (B) events that we can never see, no matter how long we wait.

If someone speaks of a photon never being able to reach us, then that means they are proposing it is past the event horizon.

The event horizon only exists in an accelerating universe.

It doesn't make sense to speak of an expansion exceeding the speed of light. Expansion is not a velocity, in those units. The units of expansion are basically inverse time. Often it is given in km/sec/Mparsec, but you can also give it as sec^-1, and it means the inverse of the time it would take to double the scale factor at the current expansion rate.
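As a rough illustrative sketch of that unit conversion, assuming a present-day rate of about 70 km/sec/Mparsec (a value assumed here for illustration, not quoted in the thread):

[code]
# Convert the Hubble rate from km/s/Mpc to inverse seconds and get the
# time to double the scale factor at the current rate.
# The value H0 = 70 km/s/Mpc is an assumption for illustration only.
import math

H0_km_s_Mpc = 70.0                      # assumed value
km_per_Mpc  = 3.0857e19                 # kilometres in one megaparsec
H0_per_sec  = H0_km_s_Mpc / km_per_Mpc  # Hubble rate in 1/s

sec_per_Gyr   = 3.156e16
hubble_time   = 1.0 / H0_per_sec / sec_per_Gyr            # e-folding time, Gyr
doubling_time = math.log(2.0) / H0_per_sec / sec_per_Gyr  # doubling time, Gyr

print(f"H0 ≈ {H0_per_sec:.2e} 1/s")
print(f"1/H0 ≈ {hubble_time:.1f} Gyr; doubling time ≈ {doubling_time:.1f} Gyr")
[/code]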

There are always regions in the universe that are receding faster than the speed of light. (Using proper time, proper distance co-ordinates.) It is perfectly possible to see a galaxy which is now and always has been receding with a recession velocity greater than c. As long as the expansion is not accelerating, a photon will be passing into new regions with smaller and smaller recession velocities, and eventually into regions where co-moving galaxies are receding at less than light speed, and then finally into our own local region, allowing us to see that distant galaxy.
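A toy numerical illustration of that last claim (made-up units with c = 1 and a source emitting at t = 1 while receding at 2c; these assumptions are chosen purely for illustration): in proper-distance coordinates an inbound photon obeys dD/dt = H(t)D - c, and a coasting universe (H = 1/t) lets it arrive, while a constant-H (accelerating) universe does not.

[code]
# Does a photon emitted toward us from a source receding faster than light
# ever arrive?  Integrate dD/dt = H(t)*D - c with c = 1.
# Case A: coasting universe, H(t) = 1/t (no acceleration).
# Case B: exponentially accelerating universe, H = const.
# Toy model with made-up numbers, for illustration only.

def photon_reaches_us(H, D0, t0=1.0, t_max=50.0, dt=1e-3):
    t, D = t0, D0
    while t < t_max:
        D += (H(t) * D - 1.0) * dt   # c = 1
        t += dt
        if D <= 0.0:
            return True, t           # the photon arrived
    return False, None

arrived, t_arr = photon_reaches_us(lambda t: 1.0 / t, D0=2.0)
print("Coasting universe:     arrives =", arrived, "at t ≈", round(t_arr, 2))

arrived, _ = photon_reaches_us(lambda t: 1.0, D0=2.0)
print("Accelerating universe: arrives =", arrived)
[/code]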

Cheers -- sylas
 
  • #107
Chalnoth said:
I think you're failing to get my point again, so let me see if I can restate it.
Dear me. As far as I can see from what you have just posted, I have not once failed to get your point, Chalnoth. So I certainly cannot fail "again." You are confusing possibility with optimality and getting the possibility of life mixed up with actual fine-tune optimality for hole abundance.

But a lot of what you say is good. Like this:

I'll simply point out what is required for lots and lots of black holes. You need:

1. A universe that lasts a long time. If it recollapses very quickly, obviously few if any black holes will form.
2. A universe that has structure formation. Stuff has to get clumpy before black holes have a chance to form.
3. At least some of the matter needs to be dissipative. That is, it has to experience friction so that it can collapse more readily.

And I believe that's all of it. Point (1) is met by our universe being nearly flat, and the cosmological constant not being too large. Point (2) is met by a combination of the magnitude of primordial perturbations, the existence of normal + dark matter, and the cosmological constant being very small. Point (3) is met by the existence of the electromagnetic force.

This may not be all that we have to fine-tune for so as to maximize hole production. There may be some particle physics allowing neutron stars to collapse more readily etc etc. But you have given a helpful summary of a good many of the factors.

The problem I have with this, however, is that these are all also requirements for life. And if the requirements for life include the requirements for black holes, then clearly any universe with life in it is going to have lots and lots of black holes. So even if there is an appearance of fine tuning, I don't think we can take it seriously.

You have argued that any universe with life in it is going to have lots and lots of black holes. Unfortunately you have not argued that any universe with life in it is going to be optimized for black holes.

That is, incapable of improvement by a small adjustment.

Surely life is not so difficult to provide as a possibility in a suboptimal universe. E.g. take our own universe and change one of the parameters which is less critical for life and more critical for holes (like the mass of a certain quark not found in normal earthlike environments). Change our Thirty (the thirty-odd dimensionless constants) just slightly, keeping life possible but making the universe suboptimal---there you have your example :-D.

However it sounds like you don't want to take seriously the possibility that the conjectured optimality is correct. Alex Vilenkin (one of the top cosmologists) took it seriously enough to publish a paper trying to disprove the conjectured optimality of parameters. But it is perfectly fine if you Chalnoth don't feel like taking it seriously. I do. It is part of the Baconian empirical tradition, the scientific method and all. Somebody puts up a falsifiable conjecture that looks like it might be correct, so you try to test it and reject it.
 
Last edited:
  • #108
I'll add this recent paper of Fred Adams to the discussion. He tries varying the fundamental constants and sees what stars would be like. (It's not directly applicable to what we are talking about, but still kind of related.)

http://arxiv.org/abs/0807.3697
Stars In Other Universes: Stellar structure with different fundamental constants
Fred C. Adams
Accepted for publication by the Journal of Cosmology and Astroparticle Physics, 29 pages, 6 figures
(Submitted on 23 Jul 2008)
"Motivated by the possible existence of other universes, with possible variations in the laws of physics, this paper explores the parameter space of fundamental constants that allows for the existence of stars. To make this problem tractable, we develop a semi-analytical stellar structure model that allows for physical understanding of these stars with unconventional parameters, as well as a means to survey the relevant parameter space. In this work, the most important quantities that determine stellar properties -- and are allowed to vary -- are the gravitational constant G, the fine structure constant alpha, and a composite parameter C that determines nuclear reaction rates. Working within this model, we delineate the portion of parameter space that allows for the existence of stars. Our main finding is that a sizable fraction of the parameter space (roughly one fourth) provides the values necessary for stellar objects to operate through sustained nuclear fusion. As a result, the set of parameters necessary to support stars are not particularly rare. In addition, we briefly consider the possibility that unconventional stars (e.g., black holes, dark matter stars) play the role filled by stars in our universe and constrain the allowed parameter space."

This paper points out what looks to me like a possible avenue to falsifying the conjecture of optimality. Our universe seems to overproduce small stars. What parameters might (if they could be adjusted) have led to a different mix with more abundant massive ones?
What is required to enable a large cloud to condense before the star's own energy blows gas away and halts the process? I think we are familiar with these considerations (e.g. the Eddington Limit) but in a different context.
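For a sense of scale, the standard Eddington luminosity is L_Edd = 4 pi G M m_p c / sigma_T, the point at which radiation pressure on ionized hydrogen balances gravity; a quick evaluation with standard constants (a sketch for orientation only):

[code]
# Eddington luminosity L_Edd = 4*pi*G*M*m_p*c / sigma_T  (SI units).
# Above this, radiation pressure on ionized hydrogen exceeds gravity and
# blows gas away.  Illustrative constants only.
import math

G       = 6.674e-11    # m^3 kg^-1 s^-2
c       = 2.998e8      # m/s
m_p     = 1.673e-27    # kg, proton mass
sigma_T = 6.652e-29    # m^2, Thomson cross-section
M_sun   = 1.989e30     # kg
L_sun   = 3.828e26     # W

def L_edd(M_kg):
    return 4.0 * math.pi * G * M_kg * m_p * c / sigma_T

for M in (1, 10, 100):  # solar masses
    print(f"M = {M:3d} M_sun  ->  L_Edd ≈ {L_edd(M * M_sun) / L_sun:.2e} L_sun")
[/code]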
 
  • #109
Chalnoth said:
Technically, a non-accelerating universe has a past horizon, meaning there exist objects that have never been in causal contact with us. It doesn't, however, have a future horizon, meaning that eventually everything will be in causal contact with us (or rather, with our position in space-time...we certainly won't be around).

Now that is a safe prediction that no one can call you on in the future.
 
  • #110
Rymer said:
Now that is a safe prediction that no one can call you on in the future.

It is a mathematical theorem concerning a widely used class of models for the universe, and a formal demonstration of the relevance of acceleration to event horizons.
 
  • #111
sylas said:
There are several horizons you can speak of. The one that was being discussed in [post=2291438]msg #16[/post] is often called the "event horizon". It partitions spacetime into (A) events we have "seen" or that we can "see" if we only wait long enough, and (B) events that we can never see, no matter how long we wait.

If someone speaks of a photon never being able to reach us, then that means they are proposing it is past the event horizon.

The event horizon only exists in an accelerating universe.

It doesn't make sense to speak of an expansion exceeding the speed of light. Expansion is not a velocity, in those units. The units of expansion are basically inverse time. Often it is given in km/sec/Mparsec, but you can also give it as sec^-1, and it means the inverse of the time it would take to double the scale factor at the current expansion rate.

There are always regions in the universe that are receding faster than the speed of light. (Using proper time, proper distance co-ordinates.) It is perfectly possible to see a galaxy which is now and always has been receding with a recession velocity greater than c. As long as the expansion is not accelerating, a photon will be passing into new regions with smaller and smaller recession velocities, and eventually into regions where co-moving galaxies are receding at less than light speed, and then finally into our own local region, allowing us to see that distant galaxy.

Cheers -- sylas

Problem with this is if the expansion VELOCITY is constant -- no acceleration -- then the obvious options are:

1) All acceleration/deceleration forces just happen to cancel.
2) Or there are no acceleration/deceleration forces.

If 2, then General Relativity does not apply -- and recession velocity is 'real' -- and likely Doppler until someone invents another theory. This would indicate that the VELOCITY would be limited by the speed of light. (No event horizon of the type you mention).

So I gather, that you are adopting the '1' solution -- correct?
 
  • #112
Rymer said:
Problem with this is if the expansion VELOCITY is constant -- no acceleration -- then the obvious options are:

1) All acceleration/deceleration forces just happen to cancel.
2) Or there are no acceleration/deceleration forces.

If 2, then General Relativity does not apply -- and recession velocity is 'real' -- and likely Doppler until someone invents another theory. This would indicate that the VELOCITY would be limited by the speed of light. (No event horizon of the type you mention).

So I gather, that you are adopting the '1' solution -- correct?

Non-sequitur. The velocity isn't constant according to all available measurement, and can't be constant consistent with existing physics.

I have said nothing about constant rates of expansion, so I am not adopting either of these counter factuals. I have simply pointed out that you were incorrect to say that event horizons have nothing to do with acceleration.

Cheers -- sylas
 
  • #113
sylas said:
Non-sequitur. The velocity isn't constant according to all available measurement, and can't be constant consistent with existing physics.

I have said nothing about constant rates of expansion, so I am not adopting either of these counter factuals. I have simply pointed out that you were incorrect to say that event horizons have nothing to do with acceleration.

Cheers -- sylas

I have a model that will do exactly that -- constant velocity -- matching data and the data matching a theory derived value. But that is not the point -- what I seem to be finding with this completely different model (be it right or wrong) is again a kind of optimal solution -- it seems to 'suffer' from a form of fine-tuning also. That I find bizarre.

This would seem to imply that there is something very subtle going on at a level we haven't properly identified. Which then leads me to an apparent 'off-the-wall' question:

How confident are we that this 'distance ladder' concept we use for our measurements isn't introducing or masking accelerations/decelerations, etc.?

This might be one explanation for the linear results I'm seeing -- i.e. our measurement system is introducing it by the ladder scaling back to equivalent nearby distance. If so, this might imply a bigger problem than a theory difference.
 
  • #114
Rymer said:
I like your Thirty.

Question: all this seems to be assuming there are really 'black holes'. I'm not sure anymore what the term means. ...
The reason I bring up the question is: is this prediction about 'black holes' or just very super-compacted matter objects? ...

Have a look at this thread in the nearby Astrophysics forum:
https://www.physicsforums.com/showthread.php?p=2304439#post2304439

An up-to-date model of what a black hole is by two first-rate people.
 
  • #115
marcus said:
Have a look at this thread in the nearby Astrophysics forum:
https://www.physicsforums.com/showthread.php?p=2304439#post2304439

An up-to-date model of what a black hole is by two first-rate people.

Thanks, I've downloaded what I could and bookmarked the rest ... it will take some reading and thought.

The Forum this evening is VERY slow -- seems to be 'service-links' (i.e. advert servers).

It's taking over five minutes for a page to load. Giving up for the evening.
 
  • #116
marcus said:
Dear me. As far as I can see from what you have just posted, I have not once failed to get your point, Chalnoth. So I certainly cannot fail "again." You are confusing possibility with optimality and getting the possibility of life mixed up with actual fine-tune optimality for hole abundance.
No, I have done nothing of the sort, so I fear you have misunderstood me once again.

First, let me state that the simple claim that the universe we observe is optimally fine-tuned for black hole abundance is almost certainly completely wrong. I can think of a rather trivial situation, after all, under which black holes will be vastly more abundant: make G larger. Make G larger by a couple of orders of magnitude, and all matter that collapses will collapse into black holes. There won't be any worry about stars or anything of the sort: black holes will be everywhere. Of course, you'd have to change some other parameters to compensate so that the universe expands as much and becomes as large. But other than that it should work pretty well.

But this doesn't actually impact the hypothesis if the hypothesis is not that our universe is optimally-tuned, but instead that the universe behaves in such a way that the production of black holes is preferred. If that is the hypothesis, then we must limit our search to only the search space where life is possible to falsify the hypothesis.

What this hypothesis predicts, then, is that our universe is such that the direction from our current location in parameter space in which there are more black holes is also a direction in which there are dramatically fewer living beings. This is, of course, very difficult to show. But that's what the hypothesis requires.

For the example above of varying G, for instance, what this would predict is that making G larger even by a little bit would make life much, much harder, if not impossible, while making G significantly smaller (which would mean fewer black holes) wouldn't be so bad for life. That's what this hypothesis of black hole optimality predicts.

marcus said:
However it sounds like you don't want to take seriously the possibility that the conjectured optimality is correct.
Not really. It's just that I think it's going to be fantastically difficult to either verify or falsify, and it would require a lot of work that I am, frankly, not interested in performing.
 
  • #117
Chalnoth said:
That's only a failure of our language.

Your language, or rather your choice of words, not mine...

It's easy enough to describe the behavior of physical systems even if you wish to consider a hypothetical system without any time coordinate.

You must be referring to configuration and phase spaces ... an often useful abstraction made much of by Julian Barbour, but not resorted to when describing bouncing black holes and suchlike. Smolin's summary of his proposals about cosmological natural selection is full of terms implying processes or behaviours, which imply simultaneity or finite duration. These concepts are inappropriate in this context. Here are examples of his description:

Smolin said:
...The world consists of an ensemble E of universes...Black hole singularities bounce and evolve to initial states of expanding universes...At each such creation event there is a small change ... the ensemble E is updated by a discrete series of steps, in each steps one new universe is created for each black hole in an existing universe...after many steps the ensemble converges...

My difficulty about what kind of time folk like Smolin and Davies are imagining here is not allayed by your reassuring claims that:

Well, you just have to recognize that reality is a bit more subtle...
It just means the simplified description is likely missing some important details of the physics which it is trying to describe.

I suspect that those who think in this way have in mind an armchair view of the multiverse rolling by and evolving in front of them. But I fear that this is just wishful imagining. Time is not simple at all, it seems.
 
Last edited by a moderator:
  • #118
Well, again, I think you're worried about superficial things like word choice, when the real meat is only found in the mathematics.
 
  • #119
Chalnoth said:
No, I have done nothing of the sort, so I fear you have misunderstood me once again.
heh heh

First, let me state that the simple claim that the universe we observe is optimally fine-tuned for black hole abundance is almost certainly completely wrong. I can think of a rather trivial situation, after all, under which black holes will be vastly more abundant: make G larger. Make G larger by a couple of orders of magnitude, and all matter that collapses will collapse into black holes.

heh heh. What you suggest would dramatically reduce the number of black holes. Universe collapses in a crunch before star and galaxy formation can get properly started.
Of course, you'd have to change some other parameters to compensate so that the universe expands as much and becomes as large. But other than that it should work pretty well.

The Devil is in the details, Chal. How exactly would you change which parameters? Glad to see you are thinking about this! Alex Vilenkin is a world-class cosmologist at Tufts and he has unsuccessfully tried to disprove this optimality. If you come up with an idea that actually works, I'm sure he would like to know.

But this doesn't actually impact the hypothesis if the hypothesis is not that our universe is optimally-tuned,...

If? The statement to be refuted is the one about optimally tuned. Let's stick with that. Let's not fudge things so we can sneak "Life" into the discussion.

but instead that the universe behaves in such a way that the production of black holes is preferred. If that is the hypothesis, then we must limit our search to only the search space where life is possible to falsify the hypothesis.

"preferred" is vague. Smolin's statement is a mathematical description of a local max---optimal tuning in other words. "Life" does not enter into the logic.

The correct prediction, then, is that this hypothesis predicts that our universe is such that the direction from our current location in parameter space in which there are more black holes is also a direction in which there are dramatically fewer living beings.

How many "living beings" are there? Can you make that rigorous? Do you know if there are "dramatically few" or "dramatically many"? What you are called the "correct prediction" is vague, I would say mathematically meaningless. And it is not the correct prediction in any case---to get there you first had to assume we aren't talking about optimal tuning.

That's what this hypothesis of black hole optimality predicts.

No, Chal. You have just made a mistake. Get over it.

a lot of work that I am, frankly, not interested in performing.

Good! Just forget about it then. :biggrin:
 
  • #120
oldman said:
... Smolin's summary (arXiv:hep-th/0612185) of his proposals about cosmological natural selection is full of terms implying processes or behaviours, which imply simultaneity or finite duration...

I would say this is a sharp insight. BTW your link needs fixing.
http://arxiv.org/abs/hep-th/0612185

I am personally not as interested in the evolving multiverse explanation as I am in the black hole local optimum conjecture. I would like to see that tested more thoroughly and either disproved or not disproved. After that then the floor is open for people to suggest explanations.

But what you point out is right, namely that the evolving multiverse picture does involve some sort of universal time. That doesn't automatically rule it out, but it's an important issue!
Obviously he has been thinking about that. A recent Smolin talk on Pirsa is about the "reality of time and the evolution of laws".
http://pirsa.org/08100049/
A recent Smolin paper is on unimodular gravity (which gives the same results as ordinary GR but has a universal time feature which GR does not.)
http://arxiv.org/abs/0904.4841
The quantization of unimodular gravity and the cosmological constant problem
Lee Smolin
22 pages
(Submitted on 30 Apr 2009)
"A quantization of unimodular gravity is described, which results in a quantum effective action which is also unimodular, ie a function of a metric with fixed determinant. A consequence is that contributions to the energy momentum tensor of the form of the metric times a spacetime constant, whether classical or quantum, are not sources of curvature in the equations of motion derived from the quantum effective action. This solves the first cosmological constant problem, which is suppressing the enormous contributions to the cosmological constant coming from quantum corrections. We discuss several forms of uniodular gravity and put two of them, including one proposed by Henneaux and Teitelboim, in constrained Hamiltonian form. The path integral is constructed from the latter. Furthermore, the second cosmological constant problem, which is why the measured value is so small, is also addressed by this theory. We argue that a mechanism first proposed by Ng and van Dam for suppressing the cosmological constant by quantum effects obtains at the semiclassical level."

Personally I don't think it's smart to make up your mind on the universal time issue before all the returns are in. It comes up in unimodular GR, it comes up in Loll's work. Also in standard cosmology. So I prefer to wait and see on that one.

The black hole optimality question does not involve the time issue.
Are the Thirty fine-tuned for hole abundance, or are they not?
By itself this makes no reference to an evolutionary process.
The process business only comes up if you assume that optimality has been established and say "Well, what if the universe is fine tuned for holes? What then? How could that be rationally explained?"
 
Last edited by a moderator:
  • #121
Thanks for correcting my bad link.
marcus said:
Personally I don't think it's smart to make up your mind on the universal time issue before all the returns are in. It comes up in unimodular GR, it comes up in Loll's work. Also in standard cosmology. So I prefer to wait and see on that one.
I am certainly not in a mind-making-up position. Too ignorant. Much better to spectate in hope that such matters will get resolved.
 
  • #122
Is it generally agreed among the people here that the universe is expanding? (I would say "expanding in size," but if the universe is infinite in size then that would make no sense.)

Among those of you who are "expansionists," is it generally agreed that the expansion taking place within any finite part of the universe is taking place uniformly, in much the same way as the expansion of a lump of rising bread dough?

Is the rate of expansion steady or accelerating?
 
  • #123
LtDan said:
Is it generally agreed among the people here that the universe is expanding? (I would say "expanding in size," but if the universe is infinite in size then that would make no sense.)

Among those of you who are "expansionists," is it generally agreed that the expansion taking place within any finite part of the universe is taking place uniformly, in much the same way as the expansion of a lump of rising bread dough?

Is the rate of expansion steady or accelerating?

The observable universe is not infinite. The rest is a philosophical debate and at the moment so is the question of 'acceleration'.
 
  • #124
marcus said:
heh heh. What you suggest would dramatically reduce the number of black holes. Universe collapses in a crunch before star and galaxy formation can get properly started.
Not if it's still flat. You'd have to change the initial conditions if G was larger, but that was part of my premise.

marcus said:
The Devil is in the details, Chal. How exactly would you change which parameters? Glad to see you are thinking about this! Alex Vilenkin is a worldclass cosmologist at Tufts and he has unsuccessfully tried to disprove this optimality. If you come up with an idea that actually works, I'm sure he would like to know.
It's not all that difficult.

Take the following situation:

1. G is larger by some factor (say two, for an example).
2. The average density of each component of the universe is smaller by the same factor.

If you then hold everything else the same, and define the primordial perturbations as a fraction of the density (such that their amplitude is cut in half along with the overall density), then we should have a pretty easy model to work with.

First, this makes some pretty simple predictions. It predicts, first of all, that the large-scale properties of the universe will be identical: it will last just as long. Structure will form in much the same way. There will, to first order, be just as many objects with mass m/2 in this hypothetical universe as there are in the current universe with mass m. Now, we might have to be careful in that the nonlinearities of gravity might create more dense objects, but I doubt they would create fewer of them. So I think just taking the number of objects with mass m/2 as in the current universe with mass m is a conservative assumption.
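As a one-line sanity check on the claim above that the large-scale expansion history is identical: for a flat universe the Friedmann equation only sees the product of G and the density, so doubling G while halving every density leaves H(t) unchanged:

[tex]H^2 = \left(\frac{\dot{a}}{a}\right)^2 = \frac{8\pi G}{3}\rho \quad\longrightarrow\quad \frac{8\pi (2G)}{3}\cdot\frac{\rho}{2} = \frac{8\pi G}{3}\rho[/tex]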

Then we have to ask: how many of these objects are black holes? Well, I was unable to find a closed form for the Tolman-Oppenheimer-Volkoff limit for neutron stars (I'm not sure one exists), but we can take a look at the Chandrasekhar limit:

[tex]m_c = \frac{\omega^0_3\sqrt{3\pi}}{2} \left(\frac{\hbar c}{G}\right)^\frac{3}{2}\frac{1}{\left(\mu_e m_H\right)^2}[/tex]

So, in this hypothetical scenario where G is twice as large, the Chandrasekhar limit is smaller by a factor of [tex]2^{3/2}[/tex]. If there were to be no change in the number of neutron stars, then the Chandrasekhar limit would need to be only half the value it is in our current universe. But it is smaller again by a factor of the square root of two, indicating that many even smaller-mass objects will be made into neutron stars, and so you'll have many, many more.
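Plugging numbers into the quoted formula makes the scaling explicit; a rough sketch with standard constants, taking mu_e = 2 and the Lane-Emden constant 2.018 as assumed inputs:

[code]
# Evaluate the Chandrasekhar-limit formula quoted above and its scaling with G:
# m_c = (omega * sqrt(3*pi)/2) * (hbar*c/G)**1.5 / (mu_e*m_H)**2   (SI units)
# omega = 2.018 is the Lane-Emden constant, mu_e = 2 for a C/O white dwarf.
import math

hbar  = 1.0546e-34   # J s
c     = 2.998e8      # m/s
G     = 6.674e-11    # m^3 kg^-1 s^-2
m_H   = 1.6735e-27   # kg
mu_e  = 2.0
omega = 2.018
M_sun = 1.989e30     # kg

def m_chandrasekhar(G_val):
    return (omega * math.sqrt(3 * math.pi) / 2) * (hbar * c / G_val) ** 1.5 / (mu_e * m_H) ** 2

print(f"M_Ch(G)  ≈ {m_chandrasekhar(G) / M_sun:.2f} M_sun")
print(f"M_Ch(2G) ≈ {m_chandrasekhar(2 * G) / M_sun:.2f} M_sun "
      f"(smaller by 2^1.5 ≈ {2 ** 1.5:.3f})")
[/code]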

To avoid this you'd have to show that in actuality, the nonlinearity of gravity makes it so that you end up with far fewer small-mass objects than you'd expect from just taking the simple linear approximation. Or you'd have to show that the TOV limit is actually proportional to 1/G. I sincerely doubt that either is the case.

marcus said:
"preferred" is vague. Smolin's statement is a mathematical description of a local max---optimal tuning in other words. "Life" does not enter into the logic.
But it has to if there is to be any relevance of the claim to observational data.

marcus said:
How many "living beings" are there? Can you make that rigorous? Do you know if there are "dramatically few" or "dramatically many"? What you are called the "correct prediction" is vague, I would say mathematically meaningless. And it is not the correct prediction in any case---to get there you first had to assume we aren't talking about optimal tuning.
Obviously it's a difficult thing to put into numbers. It's certainly beyond the amount of work I've put into it. But it must be done if you're going to try to test claims like this one.
 
  • #125
Rymer said:
The observable universe is not infinite. The rest is a philosophical debate and at the moment so is the question of 'acceleration'.
The accelerated expansion of the universe is hardly a philosophical debate. It's an observational fact.
 
  • #126
LtDan said:
Is it generally agreed among the people here that the universe is expanding? (I would say "expanding in size," but if the universe is infinite in size then that would make no sense.)
Absolutely. There is just no reasonable doubt that the universe is expanding. The evidence is quite conclusive.

LtDan said:
Among those of you who are "expansionists," is it generally agreed that the expansion taking place within any finite part of the universe is taking place uniformly, in much the same way as the expansion of a lump of rising bread dough?
This is unclear. We can only make definitive claims about our region of the universe, and cannot seriously consider very different regions.

LtDan said:
Is the rate of expansion steady or accelerating?
The evidence says accelerating.
 
  • #127
Hi Chal, I suspect your argument fails but I am glad to see you are working on it. To recap here, you want to show that the parameters we have now are not a local optimum for bh production. So you must show a small change that would increase production---I would say a 1% or 2% change would be appropriate, since he is talking about a local optimum in parameter space. Remember he has an evolutionary process in mind and evolution finds hilltops in the fitness landscape--local maxima--not the highest mountains in sight. But I would be interested if you could think of even a 10% change that would have resulted in greater hole abundance.
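To make concrete what a "local optimum under 1-2% tweaks" test would look like operationally, here is a sketch in which bh_yield is a purely hypothetical placeholder standing in for the (unknown) black-hole yield as a function of the dimensionless parameters; nothing in it is real physics:

[code]
# Sketch of a local-optimality check.  bh_yield() is a *hypothetical*
# placeholder -- in reality it would be the black-hole count of a universe
# with the given dimensionless parameters, which nobody can compute cheaply.
import random

def bh_yield(params):
    # placeholder: a made-up smooth function peaked when every parameter is 1.0
    return -sum((p - 1.0) ** 2 for p in params)

def is_local_max(params, step=0.02, trials=200):
    """False if any random tweak of up to 'step' (here 2%) raises the yield."""
    base = bh_yield(params)
    for _ in range(trials):
        tweaked = [p * (1.0 + random.uniform(-step, step)) for p in params]
        if bh_yield(tweaked) > base:
            return False
    return True

our_params = [1.0] * 30   # stand-in for the ~30 dimensionless constants
print("Locally optimal under 2% tweaks?", is_local_max(our_params))
[/code]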

Your original plan was to make gravity 100 times stronger.

Chalnoth said:
... Make G larger by a couple of orders of magnitude, and all matter that collapses will collapse into black holes...

That wasn't really very relevant. Now you want to make it twice as strong. That still wouldn't address local optimality (which is really a first order derivative or gradient thing, mathematically) but OK. You would deserve a big congratulations if you could even show an improvement from doubling the strength of gravity.

And to keep the universe from collapsing you say you want to reduce the density of each component to be half of what it is at present. As follows:
Chalnoth said:
It's not all that difficult.

Take the following situation:

1. G is larger by some factor (say two, for an example).
2. The average density of each component of the universe is smaller by the same factor.
...

You don't say what you want the expansion history to be, or what the present value of the Hubble rate is supposed to be. There's not enough detail for me to tell what happens in your universe. I do notice that different components scale differently with expansion. You just now cut the dark energy density in half (it is one component) and that density is not affected by expansion. It will be constant all the way back to the start of expansion. But the matter density is affected (as the inverse cube of the scale factor).

By reducing the density of ordinary and dark matter, you do seem to me to run the risk of getting fewer black holes (when what you want is more.)

There just isn't enough here to decide whether your example would obtain more black holes. In any case it would not disprove Smolin's conjecture of local optimality, but keep trying.

It would be nice if you or anybody could find a case that would disprove it! Competent people have tried. Notably the world-famous cosmologist Alex Vilenkin, who published a paper outlining his attempt to shoot the conjecture down. I would be delighted to see another such attempt by someone else---if they could make it stick, of course.

I suspect you do not have a counterexample (a modification that would produce a greater abundance of holes) but I encourage you to work harder on it and show more detail---particularly the expansion history, the Hubble parameter etc.
 
Last edited:
  • #128
marcus said:
Hi Chal, I suspect your argument fails but I am glad to see you are working on it. To recap here, you want to show that the parameters we have now are not a local optimum for bh production. So you must show a small change that would increase production---I would say a 1% or 2% change would be appropriate, since he is talking about a local optimum in parameter space.
The argument I used is independent of the scale of the change. It works as well for a 0.01% change as a 1000x change.

To work it out in more detail, I suspect you'd need some reasonably-large N-body simulations that include hydrodynamics, combined with better knowledge of the TOV limit, specifically how it scales with G. But I sincerely doubt you'd find anything but an exacerbation of these effects I noted, making even more black holes.

marcus said:
You don't say what you want the expansion history to be, or what the present value of the Hubble rate is supposed to be.
Well, true, I forgot to mention that in this hypothetical scenario, the universe would remain flat. But that could have either been clear from the context, or gleaned from my previous post, where I stated this explicitly. In that situation, the expansion history would be identical to our own universe, a fact chosen explicitly to make it easy to analyze. Linear structure formation should also progress in an identical manner to our own universe, with the differences coming in once you get to non-linear structure formation and the evolution of compact objects.

marcus said:
By reducing the density of ordinary and dark matter, you do seem to me to run the risk of getting fewer black holes (when what you want is more.)
Well, since structure formation runs in the same way as before, just with things having half the mass (due to half the original density), a naive analysis would equate each object with mass m in our universe to an object with mass m/2 in the hypothetical universe. Then we ask the question: Okay, if we take an object with mass m in the present universe that is not a black hole, would the equivalent object with mass m/2 in the hypothetical universe collapse into a black hole?

I can't answer that definitively, but I can show that it seems highly likely that for some objects, this is the case, as the Chandrasekhar limit scales as [tex]1/G^{3/2}[/tex], meaning that objects equivalent to those in our universe with about a factor of 1.414 lower mass than the current Chandrasekhar limit would become neutron stars. Given that the physics which govern the Chandrasekhar limit and the TOV limit are very, very similar (they're both based upon the pressure that degenerate matter can support), it is not unreasonable to suspect that the TOV limit behaves in a similar fashion, meaning even though the masses of the objects are lower, even lower-mass objects become black holes, so you get many more of them.
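To put a toy number on how much that lowered effective threshold could matter, assuming a Salpeter-like mass function and a present threshold of 1.4 solar masses (both assumptions, and not the actual TOV limit):

[code]
# Toy count: if the collapse threshold, expressed in present-universe masses,
# drops by a factor sqrt(2) as argued above, how many more objects exceed it?
# Assumes a Salpeter-like mass function dN/dm ~ m**(-2.35); illustration only.
alpha = 2.35

def n_above(threshold, m_max=100.0):
    # integral of m**-alpha from threshold to m_max (arbitrary normalization)
    a = alpha - 1.0
    return (threshold ** -a - m_max ** -a) / a

m_thresh = 1.4   # assumed present threshold in solar masses
ratio = n_above(m_thresh / 2 ** 0.5) / n_above(m_thresh)
print(f"Objects above the (lowered) threshold: {ratio:.2f}x as many")
[/code]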
 
  • #129
Umm, if you decrease the hierarchy between gravity and the other forces, quite obviously you are going to increase black hole production. This is completely general in any local cosmological neighborhood and follows from both a semiclassical treatment and the conjectured (but pretty conservative) asymptotic darkness proposals. You can tweak and escape this conclusion by playing around with cosmological initial conditions and values (say the cc), but then you are right back to putzing around with idiosyncratic probability measures and wondering about how long and how big such and such a baby universe lasts for, Poincaré recurrence times and the like.

Related is the whole question of exactly how do you count black holes anyway, which arguably is just as problematic and vague.
 
Last edited:
  • #130
If Chalnoth's twiddling with G as a constant and the average mass density went through, wouldn't it only show that these particular parameters could have had a wide range and so drop out of the basket of 30 that need to be entrained to maximal black hole production? You would still have the same problem of fine-tuning for the remaining constants?

It is really about how a package of constants hang together. So playing two scaling factors off against each other is trivial. Getting potentially 20 or 30 to explore a mutual landscape of fitness would be a rather more multi-dimensional and non-linear affair?

Of course mass density does depend on some of those 30 parameters, like mixing angles. Can they then be subsumed into a single thing - average mass density?
 
  • #131
Apeiron, as I recall Smolin's conjecture explicitly only involves the 30-or-so dimensionless parameters of the standard particle and cosmological models.
Newton's G is not part of what you get to play with. So you are right in a sense, as you suggested, G is not in the set.

You can think of the Planck units as the units in which the other stuff is expressed, changing G, or hbar, or c just changes the units, not the physics. The real stuff, like the mass of the electron, is expressed as a ratio to the Planck mass (that ratio is one of the dimensionless Thirty).
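For instance, one such dimensionless ratio evaluated with standard constants (the numerical values are assumed here for illustration):

[code]
# One of the dimensionless numbers of the kind described above:
# the electron mass as a fraction of the Planck mass.  SI constants.
import math

hbar = 1.0546e-34   # J s
c    = 2.998e8      # m/s
G    = 6.674e-11    # m^3 kg^-1 s^-2
m_e  = 9.109e-31    # kg

m_planck = math.sqrt(hbar * c / G)
print(f"Planck mass    ≈ {m_planck:.3e} kg")
print(f"m_e / m_Planck ≈ {m_e / m_planck:.3e}  (dimensionless)")
[/code]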

John Baez has an online essay on the dimensionless parameters, as I recall. You've probably seen it. If not, say and I could get a link.
 
  • #132
In case anyone else would like to follow Chalnoth's and Haelfix's example and try their hand at disproving Smolin's conjecture of bh optimality, I will link this article by Frank Wilczek, Martin Rees, Max Tegmark, and Tony Aguirre, which lists the 31 basic fundamental dimensionless constants you get to play with:
http://arxiv.org/abs/astro-ph/0511774
Dimensionless constants, cosmology and other dark matters
Max Tegmark (MIT), Anthony Aguirre (UCSC), Martin J Rees (Cambridge), Frank Wilczek (MIT)
29 pages, 13 figs; Phys.Rev.D73:023505,2006
(Submitted on 29 Nov 2005)
"We identify 31 dimensionless physical constants required by particle physics and cosmology, and emphasize that both microphysical constraints and selection effects might help elucidate their origin. Axion cosmology provides an instructive example, in which these two kinds of arguments must both be taken into account, and work well together. If a Peccei-Quinn phase transition occurred before or during inflation, then the axion dark matter density will vary from place to place with a probability distribution. By calculating the net dark matter halo formation rate as a function of all four relevant cosmological parameters and assessing other constraints, we find that this probability distribution, computed at stable solar systems, is arguably peaked near the observed dark matter density. If cosmologically relevant WIMP dark matter is discovered, then one naturally expects comparable densities of WIMPs and axions, making it important to follow up with precision measurements to determine whether WIMPs account for all of the dark matter or merely part of it."

It would seem a fairly reliable paper: Wilczek is a Nobel laureate, and Rees is the UK Astronomer Royal.
 
Last edited:
  • #133
You are free of course to redefine G's role as a constant all you want, but the ratio between physical scales is, as they say, tuned for life. So something like the ratio between the gravitational force and the strong force is a physical quantity that can be adjusted in principle.

This actually shows up when you study things like large extra dimensions, where the hierarchy scale is reduced drastically and where you might expect to see microscopic black hole production in accelerators. There are good reasons to believe this is not the case in our world (at least the original versions of ADD) but people do take it seriously in phenomenology and vast literatures exist on the subject.

For CNS, you can then start to ask questions like "Do those microscopic black holes count in the fecundity measure?", "Do they lead to larger relic densities of stellar black holes over the age of the universe?", and so forth.
 
  • #134
If anyone would like to take the conjecture seriously and try to prove it false, a good source is http://arxiv.org/abs/hep-th/0407213

This specifies black hole formation "from massive stars".
The conjecture concerns the standard dimensionless constants being optimal for bh formation from massive stars.

See pages 31-33.

An earlier post suggested changing the "hierarchy of forces" to make gravity stronger compared to the others---say the strong force. Smolin appears to have thought of that in his first papers on this in the 1990s. This amounts to decreasing [tex]\alpha_{strong}[/tex].
What he observes is that this would destabilize nuclei which participate in or expedite massive star formation. In other words it appears to be counterproductive to change the "hierarchy of forces" in the way suggested in the previous post.

You might want to read pages 31-33 rather than rely on hearsay. They comprise sections called "Successes of the Theory", "Previous Objections", and "Why a Single Heavy Pulsar Could Refute [the optimality conjecture]".
 
Last edited:
  • #135
Eh? If the ratio between alpha strong and alpha EM/weak remains the same, nuclei will not be destabilized nor will nuclear fusion processes be altered. You would have to change gravity's strength by some 15+ orders of magnitude to have any effect on that whatsoever.
 
  • #136
You are acting like you haven't read the paper. He deals with that case as well (where the weak coupling constant is decreased).

I don't have time to transcribe every case here. So please read the paper. If you want to seriously address this conjecture and try to refute it, then I suggest you read and think about what he has to say instead of expecting me to transcribe it line by line.

The paper I cited has references to earlier papers that go over in more detail what seems to go wrong playing with the coupling constants as you suggest, and much else besides.

Trying to disprove the conjecture could be a good research project for someone (with the ability to address it effectively). If anyone could disprove it, the result would presumably be publishable, and would make them well-known.
Smolin cites previous attempts to do the sort of thing you are suggesting---find a way of adjusting some of the 30-odd parameters that would have resulted in more holes.
 
  • #137
Marcus, try to understand that I am not talking about changing the strength of the strong force, the electromagnetic force or the weak force. Merely the size of Newton's constant relative to them (keeping the other three's interrelations fixed), since that was what was brought up earlier in the thread. The paper you keep pointing out is talking about an entirely different scenario (namely varying details of the nuclear forces, say the mass difference between neutrons and protons and so forth); it explicitly does not deal with gravity's strength at all!

Anyway, I generically disagree with the paper (and in fact pretty much the entire literature dealing with probability distributions in multiverses, whether they be anthropic, CNS, Stringy or anything else). Incidentally, someone raised this exact point about G to Lee when he gave this lecture several years ago at a conference I was at. He then gave a rather foggy argument that I forget exactly, based on galactic physics and stellar evolution (note this conveniently sidesteps the microscopic bh production point).

In fact, I completely agree with him that stellar black hole production rates are debatable. For instance, the most obvious thing that occurs when you change gravity's strength (fixing everything else) is to change the details of the HR diagram. Stars will tend to live shorter lifetimes. When stars live shorter lifetimes, the dynamics and timescales of molecular cloud formation and the ensuing stellar formation are altered. Why? Because the clouds rely on ionizing radiation from supernovae to seed the conditions for future star births. Since the dynamics of supernovae are also altered when you mess with G, it requires numerical simulations to handle quantitatively. In short, a complicated mess with lots of competing feedbacks that no one understands sufficiently well. Having said that, you get so much more clumping and so many more seeds that it's a little hard to see the feedbacks competing with a change of a couple of orders of magnitude in G, but anyway I won't debate this part of the hypothesis.

Still, generically you do expect a lot of micro black hole production with a reduced hierarchy, and that's conveniently ignored in the hypothesis and gets into exactly how you measure the distribution of black holes in the first place. Do mergers count as one or two? What about a single black hole with an event horizon the size of our visible universe---how do we count that? Is there a mass cutoff that we are considering here (keeping in mind that different universes might have different mass hierarchies)? What timescales are we talking about? Do we correct for recurrence times? And so on and so forth. It's essentially the same sort of nonsense that people argue about when dealing with anthropic multiverse measures.
 
Last edited:
  • #138
Haelfix, your speculation and reasoning are interesting but unrelated to Smolin's conjecture as stated, for example, in the paper I just cited.
There he conjectures that the dimensionless parameters are fine-tuned for astrophysical black holes---those produced by the collapse of massive stars.


So varying G is irrelevant (although Chalnoth mentioned it) because
G is not dimensionless.

And talking about microscopic black holes is irrelevant.

What somebody needs to do is come to grips with the actual conjecture and find some reason to dismiss it. That is, make a serious effort to refute, as I take it Vilenkin did and also apparently Joe Silk in 1997. Silk was at UC Berkeley then, later moved to Oxford. He is one of the most eminent cosmologists in the world. His attack on Smolin's conjecture was apparently published in Science. But it didn't cut the mustard. So Vilenkin tried a few years later, after Smolin's 2004 essay appeared in a book of scholarly papers published by Cambridge U.P.

==============
EDIT to reply Chalnoth.
Dear Chalnoth, the conjecture is the conjecture as stated. It concerns astrophysical black holes resulting from a certain process---the collapse of massive stars. What you are proposing to do (or Haelfix, if he still considers that an appropriate reaction) is not science. In science, you do not alter the hypothesis in order to reject it. :biggrin:

No one is arbitrarily "throwing out" micro holes, because they were never included in the conjecture in the first place. :rofl:
 
Last edited:
  • #139
marcus said:
What somebody needs to do is come to grips with the actual conjecture and find some reason to dismiss it.
Actually, Haelfix did just that through reference to microscopic black holes. Because you can't just arbitrarily throw those out without reason to do so.
 
  • #140
marcus said:
Haelfix, your speculation and reasoning are interesting but unrelated to Smolin's conjecture as stated, for example, in the paper I just cited.
There he conjectures that the dimensionless parameters are fine-tuned for astrophysical black holes---those produced by the collapse of massive stars....

What somebody needs to do is come to grips with the actual conjecture and find some reason to dismiss it. ...

==============
EDIT to reply Chalnoth.
Dear Chalnoth, the conjecture is the conjecture as stated. It concerns astrophysical black holes resulting from a certain process---the collapse of massive stars. What you are proposing to do (or Haelfix, if he still considers that an appropriate reaction) is not science. In science, you do not alter the hypothesis in order to reject it. :biggrin:

So far no refutation has been sustained, either in the playpen or outside in the real intellectual world (by competent experts who have tried).

It may very well be wrong. If someone could show that, they could gain considerably in reputation. (I for one would be delighted.)

As explicitly stated, this is about dimensionless parameters being at a local max for astro holes. A good source is the 2004 paper. But there is also a 2006 paper, and a couple of scholarly books with chapters devoted to it:
Bernard Carr's Universe or Multiverse? (Cambridge 2007)
Rudy Vaas Beyond the Big Bang (Springer 2009)
The 2006 paper is
http://arxiv.org/abs/hep-th/0612185
The 2004 paper, in case someone didn't get the link earlier, and hasn't read the relevant parts yet, is
http://arxiv.org/abs/hep-th/0407213

In everything I've seen, the conjecture has been explicit and consistent with what I just said (dimensionless, astrophysical). It rests on quite a bit of discussion which anyone can read if they wish.
It is naive to think that "you can't" propose such a conjecture.

Another book is in preparation---Smolin and Unger---about this and related topics. Time, for example.

I see that Springer has finally sent out reviewer copies of "Beyond the Big Bang". Here is the table of contents.
http://www.springer.com/astronomy/general+relativity/book/978-3-540-71422-4?detailsPage=toc
Apparently won't be in the bookstores until December 2009.
 
Last edited by a moderator:
