Insights: Answering Mermin’s Challenge with the Relativity Principle

  • #51
RUTA said:
Go to this Insight and you'll see the solution I'm talking about.

Thank you for the pointer, it's much better to talk about a specific model.

RUTA said:
the differential equation I am solving

Do you mean equation (18) in the Insight?
 
  • #52
PeterDonis said:
Do you mean equation (18) in the Insight?
Yes
 
  • #53
RUTA said:
Yes

Ok, then yes, I agree you can pick ##a(0) \neq 0## in your solution, and, as far as I can tell, that also makes ##\dot{a}##, ##\ddot{a}##, and ##\rho## finite at ##t = 0## (basically because you have substituted ##t + B## for ##t##, so all of the values at ##t = 0## are proportional to some power of ##B## instead of diverging).

However, this model is obviously extensible to negative values of ##t##, and when you reach ##t = - B##, your model has ##a = 0## and ##\dot{a}##, ##\ddot{a}##, and ##\rho## all infinite. So your model is not a different model from the standard one, it's just a shift of the ##t## coordinate by ##B## (strictly speaking there is a rescaling of ##t## as well). Considering the patch ##t \ge 0## in this model is simply equivalent to only considering the patch ##t \ge B## in the standard Einstein-de Sitter model. This is not a model in which the singularity theorems are violated; it's just a model in which you have artificially restricted attention to a particular patch.
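
(For concreteness, and assuming equation (18) is the usual flat, matter-dominated Friedmann equation, the shifted solution would be
$$a(t) = A\,(t + B)^{2/3}, \qquad \rho(t) = \frac{1}{6\pi G\,(t + B)^2},$$
so at ##t = 0## everything is finite and proportional to some power of ##B##, while at ##t = -B## we get ##a = 0## and ##\rho \to \infty##, exactly as the unshifted model does at ##t = 0##.)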
 
  • #54
PeterDonis said:
Ok, then yes, I agree you can pick ##a(0) \neq 0## in your solution, and, as far as I can tell, that also makes ##\dot{a}##, ##\ddot{a}##, and ##\rho## finite at ##t = 0## (basically because you have substituted ##t + B## for ##t##, so all of the values at ##t = 0## are proportional to some power of ##B## instead of diverging).

However, this model is obviously extensible to negative values of ##t##, and when you reach ##t = - B##, your model has ##a = 0## and ##\dot{a}##, ##\ddot{a}##, and ##\rho## all infinite. So your model is not a different model from the standard one, it's just a shift of the ##t## coordinate by ##B## (strictly speaking there is a rescaling of ##t## as well). Considering the patch ##t \ge 0## in this model is simply equivalent to only considering the patch ##t \ge B## in the standard Einstein-de Sitter model. This is not a model in which the singularity theorems are violated; it's just a model in which you have artificially restricted attention to a particular patch.
Right, the singularity theorem is not violated because it is still true that there are timelike and null geodesics with finite affine parameter lengths into the past (finite proper time). But, all the observables and physical parameters are finite (except meaningless ones like the volume of spatial hypersurfaces of homogeneity). It is absolutely "artificial" in that there is no dynamical reason whatsoever for not extending the solution into the past (with negative values of t) all the way to ##a = 0##. But, in the 4D global self-consistent view, there is no reason to do that. You only need as much of the spacetime manifold as necessary to account for your observations. I don't foresee a need for ##\rho = \infty##, i.e., ##a = 0##, but if we ever do need such ##\infty##, then you can include it at that point.
 
  • #55
RUTA said:
all the observables and physical parameters are finite

Not at ##t = - B##. There the density ##\rho## is infinite.

RUTA said:
in the 4D global self-consistent view, there is no reason to do that

Yes, there is, because in the 4D global self-consistent view, the manifold is its maximal analytic extension. Arbitrarily cutting it off at some point prior to that makes no sense on that view. If you think it does because of some "adynamical constraint", what is that constraint? It can't be "because RUTA prefers to cut off the solution at ##t = 0## in his model".

RUTA said:
You only need as much of the spacetime manifold as necessary to account for your observations.

Not if you want your model to make testable predictions about observations that haven't been made yet.
 
  • #56
PeterDonis said:
Not at ##t = - B##. There the density ##\rho## is infinite.

Yes, there is, because in the 4D global self-consistent view, the manifold is its maximal analytic extension. Arbitrarily cutting it off at some point prior to that makes no sense on that view. If you think it does because of some "adynamical constraint", what is that constraint? It can't be "because RUTA prefers to cut off the solution at ##t = 0## in his model".

Not if you want your model to make testable predictions about observations that haven't been made yet.
As I explained in the Insight, EEs of GR constitute the constraint. Any solution of EEs that maps onto what you observe or could conceivably observe is fair game. There is nothing in GR that says you must include extensions of M beyond what maps to empirically verifiable results. But, if you have a prediction based on ##a = 0## and ##\rho = \infty##, by all means include that region.
 
  • #57
RUTA said:
As I explained in the Insight, EEs of GR constitute the constraint.

That doesn't explain why you would cut off a solution of the EFE short of its maximal analytic extension.

RUTA said:
There is nothing in GR that says you must include extensions of M beyond what maps to empirically verifiable results.

Again, you have to do this if you want your model to make testable predictions about observations that haven't been made yet.

Also, the position you appear to be taking seems highly implausible on your own "blockworld" viewpoint. Why would a "blockworld" just suddenly have an "edge" for no reason? It seems much more reasonable to expect any "blockworld" to extend as far as the math says it can.
 
  • #58
PeterDonis said:
That doesn't explain why you would cut off a solution of the EFE short of its maximal analytic extension.

Again, you have to do this if you want your model to make testable predictions about observations that haven't been made yet.

Also, the position you appear to be taking seems highly implausible on your own "blockworld" viewpoint. Why would a "blockworld" just suddenly have an "edge" for no reason? It seems much more reasonable to expect any "blockworld" to extend as far as the math says it can.
Look again at the partial parabola for the trajectory of a ball with ##y(0) = 3##. We don’t include the mathematical extension into negative times and demand that we must therefore include ##y = 0##. Why not? Because we don’t believe there can be any empirical evidence of that fact. So, in adynamical thinking the onus is on you to produce a prediction with empirical evidence showing you need to include ##a = 0## with ##\rho = \infty##. We can then do the experiment and see if your prediction is verified. If so, according to your theory, we need to include that region. There is no reason to include mathematics in physics unless that mathematics leads to empirically verifiable predictions. So, what is your prediction?
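
(To spell out the analogy with illustrative numbers: for a ball released from ##y(0) = 3\ \text{m}## with some upward speed ##v_0##, the solution of ##\ddot{y} = -g## is
$$y(t) = 3 + v_0 t - \tfrac{1}{2} g t^2,$$
which, continued to negative ##t##, does pass through ##y = 0## at some ##t < 0##. Nothing in the mathematics forces us to include that root in the physical model of the throw.)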
 
  • #59
RUTA said:
They're looking for past extendability and found it. Why were they looking for that? Because they were thinking dynamically. Here is an analogy.

Set up the differential equations in y(t) and x(t) at the surface of Earth (a = -g, etc.). Then ask for the trajectory of a thrown baseball. You're happy not to past extend the solution beyond the throw or future extend into the ground because you have a causal reason not to do so. But, the solution is nonetheless a solution without those extensions. Same for EEs with no past extension beyond a(0) and a choice of a(0) not equal to zero. Why are you not satisfied with that being the solution describing our universe? There's nothing in the data that would ever force us to choose a(0) = 0 singular. The problem is that the initial condition isn't explained as expected in a dynamical explanation. All we need in 4D is self-consistency, i.e., we only have to set a(0) small enough to account for the data. Maybe someday we'll have gravitational waves from beyond the CMB and we'll be able to push a(0) back to an initial lattice spacing approaching the Planck length. But, we'll never have to go to a singularity.
I am missing something very basic here. Take for example ##y''=2## on the interval ##[0,2]## with ##y(1)=1## and ##y(2)=4##. The only solution is ##y(x)=x^2##. How do you make ##y(0)## not equal to zero?
 
  • #60
martinbn said:
I am missing something very basic here. Take for example ##y''=2## on the interval ##[0,2]## with ##y(1)=1## and ##y(2)=4##. The only solution is ##y(x)=x^2##. How do you make ##y(0)## not equal to zero?
There are any number of reasons you might want to use ##y = 0##, but you have to come up with the reason to do so. You don’t use the math to dictate the use of ##y = 0##. What if I want to use the math for throwing a ball? I don’t use ##y = 0## because I believe it is not possible to find empirical verification of that fact. Again, the empirically verifiable physics drives what you use of the math, not the converse. So, again, what is your prediction requiring I keep ##a = 0## with ##\rho = \infty##? Produce that prediction and its empirical verification and we’ll know we have to keep that region.
 
  • #61
My point was that, in the given example, if I need the value ##y(0)##, I don't have the freedom to choose it; it is a consequence of the equation and the other values. It seemed to me that you were saying that in the cosmological model you can just adjust that value?
 
  • #62
The key phrase there is "if I need the value of ##y = 0##" (the origin of the time parameterization is irrelevant of course). So, what dictates your need? Empirical results, not math results. Same with ##a = 0## with ##\rho = \infty##. Do you have an empirically verifiable prediction requiring we keep ##a = 0## with ##\rho = \infty##? If so, we'll check it and if you're right, we'll need to keep that region. Otherwise, why would we keep it?
 
  • #63
RUTA said:
The key phrase there is "if I need the value of ##y = 0##" (the origin of the time parameterization is irrelevant of course). So, what dictates your need? Empirical results, not math results. Same with ##a = 0## with ##\rho = \infty##. Do you have an empirically verifiable prediction requiring we keep ##a = 0## with ##\rho = \infty##? If so, we'll check it and if you're right, we'll need to keep that region. Otherwise, why would we keep it?
I suppose I misunderstood. I thought you were claiming that at ##t=0## the quantity ##a## has to have a value, and since the value zero is problematic you don't use that value, but you use a different value. Of course not using any value and saying that the solution is valid only for ##t>0## is fine, and that is what is done in GR anyway.
 
  • #64
martinbn said:
... that is what is done in GR anyway.
Exactly, we’re using ##\Lambda\text{CDM}## successfully to make predictions relying on conditions even before decoupling (anisotropies in the CMB power spectrum depend on pre-decoupling oscillations). No one is saying, “Well, if you extrapolate that cosmology model backwards in time far enough, you get ##\rho = \infty##, so I guess we have to stop using it otherwise.” That’s silly. Again and again, as I keep showing, adynamical thinking vindicates what we’re doing in modern physics, revealing its coherence and integrity. The Insight here is refuting Smolin et al. who believe “quantum mechanics is wrong because it’s incomplete.” Modern physics isn’t wrong or incomplete; true, it isn’t finished (we need to connect quantum and GR), but what we have is totally right. All its mysteries can be attributed to our dynamical bias (you don’t have to attribute them to that, but you can).
 
  • #65
RUTA said:
Why were they looking for that? Because they were thinking dynamically. Here is an analogy.

Set up the differential equations in y(t) and x(t) at the surface of Earth (a = -g, etc.). Then ask for the trajectory of a thrown baseball. You're happy not to past extend the solution beyond the throw or future extend into the ground because you have a causal reason not to do so. But, the solution is nonetheless a solution without those extensions.
Why should we be happy with this? Only because we know that before the moment of the throw we have a completely different physical situation, namely a baseball in a hand. There was a physical act of creation of the flying baseball which tells us that it makes no sense to apply the physics valid after the act of creation to the situation before. So, the creationist is happy with the trajectory of the world as described in the Holy Scripture and is happy not to extend the actual laws of physics past the moment of creation.
RUTA said:
Same for EEs with no past extension beyond a(0) and a choice of a(0) not equal to zero. Why are you not satisfied with that being the solution describing our universe? There's nothing in the data that would ever force us to choose a(0) = 0 singular.
Because we have no evidence for any act of creation of different physics for any a(0)>0. The creationist has such evidence, in his Holy Book. We have no such scripture.
RUTA said:
The problem is that the initial condition isn't explained as expected in a dynamical explanation. All we need in 4D is self-consistency, i.e., we only have to set a(0) small enough to account for the data.
In this case, your approach is even worse than I thought when I started my creationist analogy. The subset of our FLRW universe restricted to the last 5000 years is (or at least we hope so) self-consistent. There is nothing in the data which forces us to go beyond the 5000 years.

In classical "dynamical thinking", there is a lot that forces us to look into the past. Namely, there is causality, with Reichenbach's common cause principle (the one we have to throw away to save relativity), which forces us to search for common causes for all the correlations we see around us, for all those dinosaur bones and so on. But with the rejection of the common cause principle, they have to be simply ignored. Is there anything inconsistent with these bones? No. Thus, there is no problem.

Science is what it is because the scientific method identifies open problems of existing theories. Without trying to solve open problems, scientists could restrict themselves to teaching the Holy Scriptures of Euclid or Ptolemaeus, maybe Newton, maybe Einstein, whatever the state of science happened to be when scientists stopped caring about the usual problems of "dynamical thinking" and restricted themselves to the consistency of the Scriptures.
RUTA said:
Maybe someday we'll have gravitational waves from beyond the CMB and we'll be able to push a(0) back to an initial lattice spacing approaching the Planck length. But, we'll never have to go to a singularity.
This is not how it works. First, we have to compute something nontrivial about those gravitational waves. This initial step already requires going far beyond what is accessible to data now. Any attempt to build devices that could test that particular theory of gravitational waves in the CMBR would have to be based on this application of GR (together with similar applications of alternatives to GR, or some sort of null hypothesis if there are none).
 
  • #66
It's always true, even with dynamical thinking, that reality could have begun 10 min ago with all the signs of a past beyond that. You can choose to model reality that way, but I'm not suggesting we do so. I'm saying we should take the model as far back as necessary to account for our observations. With anisotropies in the CMB that's before decoupling. @Elias1960, you have a strong dynamical bias, so you should pursue a model consistent with that. Again, you're not in any way refuting the point I'm making by simply espousing your dynamical bias.
 
  • #67
RUTA said:
We don’t include the mathematical extension into negative times and demand that we must therefore include ##y = 0##. Why not?

Because we already know there is a constraint: the ball wasn't freely flying at negative times, it was sitting on the ground. So we don't extend the parabolic trajectory to negative times because we know it doesn't apply. Instead, we join that trajectory to a different trajectory for negative times.

Now, suppose that we weren't watching the ball at all at negative times and had no empirical evidence whatever of its trajectory then. But we do know that the surface of the Earth is there and that the parabolic trajectory intersects that surface at ##t = 0##. How would we model the ball? Would we just throw up our hands and say, well, we don't have any evidence at negative times so we'll just cut off our model at ##t = 0## and stop there? Or would we exercise common sense and predict that, at negative times, the ball is sitting on the surface of the Earth, and someone threw it upwards at time ##t = 0##, and extend our model accordingly?

As far as I can tell, you prefer the first alternative and I (and others, it appears) prefer the second. Can I give you a logical proof that you must use the second alternative? No. But I can tell you that the first alternative makes no sense to me, and I suspect it makes no sense to a lot of other people.

RUTA said:
in adynamical thinking the onus is on you to produce a prediction with empirical evidence showing you need to include ##a = 0## with ##\rho = \infty##.

I haven't made any such prediction. I don't have a problem with looking for a solution that does not have ##\rho = \infty## at ##t = 0##. And we have such solutions: inflationary models do not require ##\rho = \infty## at ##t = 0##. Eternal inflation is a possibility. Other possibilities have been suggested as well. If your position is that everybody except you is stuck in a rut thinking we have to have ##\rho = \infty## at ##t = 0##, then I think you are ignoring a lot of work being done in cosmology.

OTOH, what I do have a problem with is saying, oh, well, we don't have any empirical evidence for times before the hot, dense, rapidly expanding state that in inflationary models occurs at the end of inflation, so we'll just cut off the model there and pretend nothing existed before that at all, it just suddenly popped into existence for no reason. That, to me, is not valid adynamical thinking. Valid adynamical thinking, to me, would be that the 4-D spacetime geometry, which does not "evolve" but just "is", should extend to wherever its "natural" endpoint is. The most natural thing would be for it to have no boundary at all, which means that if your model has a boundary in it, which it certainly does if you arbitrarily cut off the model the way you are describing, your model is obviously incomplete. Unless you can show some valid adynamical constraint that requires there to be a boundary at that particular place in the 4-D geometry. I have not seen any such argument from you.

RUTA said:
I'm saying we should take the model as far back as necessary to account for our observations.

But why should we stop there? Why should our observations be the criterion for where the 4-D spacetime geometry of the universe has a boundary?

RUTA said:
There is no reason to include mathematics in physics unless that mathematics leads to empirically verifiable predictions.

Inflationary models, which carry the 4-D spacetime geometry of the universe back past the earliest point we can currently observe directly, do make empirically verifiable predictions. But those models were developed before anyone knew that they would be able to make such predictions. You seem to be saying nobody should bother working on any model unless it covers a domain we already have empirical data from. That doesn't make sense to me; if we did that we would never make any predictions about observations we haven't made yet. But science progresses by making predictions about observations we haven't made yet.

RUTA said:
No one is saying, “Well, if you extrapolate that cosmology model backwards in time far enough, you get ##\rho = \infty##, so I guess we have to stop using it otherwise.”

You're right that no one is saying that. But that's because no one is extrapolating the model backwards in time to ##\rho = \infty## in the first place. Everyone appears to me to be looking at how to extend our best current model in ways that don't require ##\rho = \infty## anywhere. Nobody appears to me to be saying, "oh, well, we'll just have to arbitrarily cut off the model at the earliest point where we can make observations, and say that adynamical thinking prevents us from going further until we have more evidence".
 
  • #68
RUTA said:
It's always true, even with dynamical thinking, that reality could have begun 10 min ago with all the signs of a past beyond that. You can choose to model reality that way, but I'm not suggesting we do so. I'm saying we should take the model as far back as necessary to account for our observations.
What does "to account for our observations" mean? This is clear and obvious to me, given my "dynamical thinking" and my insistence on Reichenbach's common cause principle, which defines what counts as a reasonable explanation. You reject both and rely on consistency only.

I argue that this restriction to consistency only is inconsistent, and in this particular post in conflict with "we should take the model as far back as necessary to account for our observations", because the consistency of our observations requires essentially nothing of that sort.
 
  • #69
PeterDonis said:
Because we already know there is a constraint: the ball wasn't freely flying at negative times, it was sitting on the ground. So we don't extend the parabolic trajectory to negative times because we know it doesn't apply. Instead, we join that trajectory to a different trajectory for negative times.

Now, suppose that we weren't watching the ball at all at negative times and had no empirical evidence whatever of its trajectory then. But we do know that the surface of the Earth is there and that the parabolic trajectory intersects that surface at ##t = 0##. How would we model the ball? Would we just throw up our hands and say, well, we don't have any evidence at negative times so we'll just cut off our model at ##t = 0## and stop there? Or would we exercise common sense and predict that, at negative times, the ball is sitting on the surface of the Earth, and someone threw it upwards at time ##t = 0##, and extend our model accordingly?

As far as I can tell, you prefer the first alternative and I (and others, it appears) prefer the second. Can I give you a logical proof that you must use the second alternative? No. But I can tell you that the first alternative makes no sense to me, and I suspect it makes no sense to a lot of other people.

The difference here is that there is an external context for the ball's trajectory, whereas there is no such external context for cosmology. You're tacitly using that external context to infer empirical results. Again, in physics there must be some empirical rationale for using the mathematics. So, yes, without an external context and a physical motivation otherwise, what would motivate you to include ##a = 0## with ##\rho = \infty## in your model? The burden is on you to motivate the use of the math. That is precisely what we're doing now with ##\Lambda\text{CDM}##, i.e., we're using it where it can account for observations. If someone used ##a = 0## with ##\rho = \infty## to make a testable empirical prediction and that prediction was verified, then we would include it in our model. It's that simple. You're just not giving me any empirical reason to include that region, so why would I? There is something that is driving you to believe ##a = 0## with ##\rho = \infty## should be included despite the lack of empirical motivation. Can you articulate that motive?
 
  • #70
Elias1960 said:
What does "to account for our observations" mean? This is clear and obvious to me, given my "dynamical thinking" and my insistence on Reichenbach's common cause principle, which defines what counts as a reasonable explanation. You reject both and rely on consistency only.
To account for the Planck distribution of the CMB or the anisotropies in its power spectrum, for example. The self-consistency I'm talking about is in EEs. Did you read my GR Insight on that? And I do not reject dynamical explanation. I use it all the time.

My claim is that if you view adynamical constraints as fundamental to dynamical laws, then many mysteries of modern physics, such as entanglement per the Bell states, disappear. You have said nothing to refute that point. All you have done is espouse your dynamical bias in response. If I had claimed that you MUST use constraints to dispel the mysteries of modern physics, then your replies would be relevant. But, I never made that claim. To refute my claim, you would have to accept my premise that constraints are explanatory and show how they fail to explain something that I claim they explain. So, for example, show how my constraint, conservation per NPRF, cannot explain conservation per the Bell states, conceding first that constraints are explanatory. I don't see how that's possible, but I'll let you try. You haven't even made an effort wrt cosmology, all you've done is espouse your dynamical bias there.
 
  • #71
RUTA said:
The difference here is that there is an external context for the ball's trajectory, whereas there is no such external context for cosmology.

I don't see the difference. In both cases you have a model and an obvious way to extend it. The only difference is that the ball is not the entire universe, but if that actually made a difference it would mean, by your logic, that we can never extrapolate anything for the entire universe beyond what we have already observed. Which, as I have said, is not how progress has been made in science.

RUTA said:
without an external context and a physical motivation otherwise, what would motivate you to include ##a = 0## with ##\rho = \infty## in your model?

You're not even reading what I'm saying. I have never said we have to do that. You are talking as if this is the only possible extension of any cosmological model beyond what we have already observed. It isn't.

RUTA said:
That is precisely what we're doing now with ##\Lambda\text{CDM}##, i.e., we're using it where it can account for observations.

We are also looking at extending ##\Lambda\text{CDM}##, for example with inflation models. By your logic, nobody should be bothering to do that unless and until we get some actual direct observations from an inflationary epoch.

RUTA said:
There is something that is driving you to believe ##a = 0## with ##\rho = \infty## should be included

I have never made any such claim. I don't know who you think you are responding to with these repeated references to ##\rho = \infty##, but it isn't me. You need to read what I'm actually saying instead of putting words in my mouth.
 
  • #72
RUTA said:
To refute my claim, you would have to accept my premise that constraints are explanatory and show how they fail to explain something that I claim they explain. So, for example, show how my constraint, conservation per NPRF, cannot explain conservation per the Bell states, conceding first that constraints are explanatory. I don't see how that's possible, but I'll let you try. You haven't even made an effort wrt cosmology, all you've done is espouse your dynamical bias there.
No, I do not plan to accept that constraints are explanatory, I'm sure they are not and have arguments for this.

I start from causality, with Reichenbach's common cause principle, and it follows that every correlation which does not have a causal explanation, via a direct causal influence or a common cause, is an open, unexplained correlation. I argue that giving up Reichenbach's common cause principle is useful for the tobacco lobby, because the lobby then no longer needs to find any common cause for lung cancer and smoking to replace the "smoking causes lung cancer" explanation. In other words, giving it up is far too absurd to be taken seriously.
RUTA said:
My claim is that if you view adynamical constraints as fundamental to dynamical laws, then many mysteries of modern physics, such as entanglement per the Bell states, disappear.
And I counter that nothing happens to the mysteries of entanglement; they remain mysteries and have to remain mysteries until you accept non-mystical faster-than-light causal influences as an explanation. This is a variant of Bell's theorem, one which uses Reichenbach's common cause principle (instead of EPR realism), and theorems will not go away. Once you claim that your constraints explain the Bell correlations, you have to accept as an explanation something that is in conflict with Reichenbach's common cause principle, and this is simply absurd (at least for scientists; astrologers and other mystics will disagree).

A correlation is unexplained if it has no common cause explanation, period. If you disagree, explain how to handle a claim by the tobacco industry that lung cancer correlations do not need any common cause explanation.
RUTA said:
And I do not reject dynamical explanation. I use it all the time.
This is, in fact, a variant of a quite common feature of Bell theorem discussions. The defenders of relativity against a preferred frame question almost everything (causality, realism, even logic) but continue to apply the same principles they reject in the Bell discussion without any hesitation elsewhere.

It is this typical appearance of double standards that I try to attack with my tobacco industry, astrology, and creationist analogies.
 
  • #73
PeterDonis said:
I don't see the difference. In both cases you have a model and an obvious way to extend it. The only difference is that the ball is not the entire universe, but if that actually made a difference it would mean, by your logic, that we can never extrapolate anything for the entire universe beyond what we have already observed. Which, as I have said, is not how progress has been made in science.

You're not even reading what I'm saying. I have never said we have to do that. You are talking as if this is the only possible extension of any cosmological model beyond what we have already observed. It isn't.

We are also looking at extending ##\Lambda\text{CDM}##, for example with inflation models. By your logic, nobody should be bothering to do that unless and until we get some actual direct observations from an inflationary epoch.

I have never made any such claim. I don't know who you think you are responding to with these repeated references to ##\rho = \infty##, but it isn't me. You need to read what I'm actually saying instead of putting words in my mouth.
You're not responding to what I'm saying. I never said you shouldn't explore the observable consequences of pushing the model back in time. That's exactly what was done to make the predictions of anisotropies in the CMB power spectrum many years before we made the observations. But, and this IS what I'm saying, you let the physics dictate that extrapolation, not the math. Again, the only problematic region is ##\rho = \infty##, so that's why I'm asking what physics you believe justifies me keeping that region. And you keep agreeing with me that we should push back farther into time, which does not answer my question about the only problematic region.

Wald and Elias1960 are clear about why they believe we are forced to include ##a = 0## with ##\rho = \infty## in M -- it's pathological from a dynamical perspective not to do so. But, from an adynamical perspective it's perfectly reasonable to include only that region of M that you believe can or can conceivably render empirical results. This doesn't rule out the exploration of theories like inflation at all, regardless of what motivates them. They constitute exploration of alternate cosmology models, which is of course a perfectly reasonable thing to do.

If the new model makes a prediction that disagrees with the current best cosmology model in some respect while agreeing with all currently available data that the current model gets right, and that prediction vindicates the new model, then the new model wins. There is nothing in this process that says we have to accept empirically unmotivated mathematical extrapolations, i.e., those that cannot or cannot conceivably render empirical results. So, do you believe such empirically unmotivated extrapolations are required? If so, why?
 
  • #74
Elias1960 said:
No, I do not plan to accept that constraints are explanatory, I'm sure they are not and have arguments for this.
Fine, but that does not in any way refute my point. Do you understand that fact?

Elias1960 said:
It is this typical appearance of double standards that I try to attack with my tobacco industry, astrology, and creationist analogies.
There is no double standard here. I told you constraint-based explanation does not rule out causal explanation in my view. Indeed, the vast majority of our experience can be easily explained via causal mechanisms. I never said otherwise, I'm simply showing how everything can be explained self-consistently by assuming adynamical constraints are fundamental. If you have a dynamical counterpart, do what I'm doing, i.e., publish papers explaining the idea and use it to explain experimental results (which involves fitting data and comparing to other fitting techniques), present at conferences, write a book with a legit academic press, etc. That's the academic game.
 
  • #75
RUTA said:
I never said you shouldn't explore the observable consequences of pushing the model back in time. That's exactly what was done to make the predictions of anisotropies in the CMB power spectrum many years before we made the observations. But, and this IS what I'm saying, you let the physics dictate that extrapolation, not the math.

To me, "let the physics dictate the extrapolation" means "explore the observable consequences of pushing the model back in time". So I don't see the distinction you are making here.

RUTA said:
the only problematic region is ##\rho = \infty##, so that's why I'm asking what physics you believe justifies me keeping that region

And, as I have said repeatedly now, I have never made any claim that there is justification for keeping ##\rho = \infty##. So I don't see why you keep asking me to justify a claim that I have never made.

RUTA said:
you keep agreeing with me that we should push back farther into time

But I have never claimed that "push farther back into time" requires including ##\rho = \infty##. In fact I have explicitly said the opposite, when I pointed out that inflation models do not require ##\rho = \infty## anywhere and that eternal inflation models do not have it anywhere.

RUTA said:
Wald and Elias1960 are clear

And if you want to ask @Elias1960 to justify his viewpoint, or email Wald to ask him to justify his, that's fine. But that doesn't explain why you keep asking me to justify a claim that I have never made.
 
  • #76
RUTA said:
from an adynamical perspective it's perfectly reasonable to include only that region of M that you believe can or can conceivably render empirical results

Why? Why should spacetime just suddenly end at the point where our ability to observe stops?

For example, consider Schwarzschild spacetime at and inside the horizon. This region is in principle unobservable from outside the horizon. Are you saying we should arbitrarily cut off our models of black holes just a smidgen above the horizon?

Note that I am not saying that "dynamics" requires us to continue spacetime. I am considering spacetime just like you are in your blockworld viewpoint, as a 4-D geometry that doesn't change or evolve, it just is. I'm asking for an adynamical reason why 4-D spacetime should just suddenly end, and "because that's all we can observe" doesn't seem like a valid one to me.
 
  • #77
RUTA said:
But, from an adynamical perspective it's perfectly reasonable to include only that region of M that you believe can or can conceivably render empirical results. ...There is nothing in this process that says we have to accept empirically unmotivated mathematical extrapolations, i.e., those that cannot or cannot conceivably render empirical results. So, do you believe such empirically unmotivated extrapolations are required? If so, why?
The issue here has, I think, nothing to do with dynamical vs. adynamical thinking, it is about something completely different.

The question is theory development beyond existing theory. Maybe no further theory development is necessary, and GR is simply the true theory? In this case, it should always give reasonable answers. ##\rho = \infty## is unreasonable.

The situation with this in the blockworld is even worse than in the dynamical perspective. The dynamical perspective allows excluding the ##\rho = \infty## singularity in a quite simple way, almost automatically. The preferred background is harmonic (I have not yet seen a reasonable alternative); in flat FLRW the comoving space coordinates are harmonic, but proper time is not, and harmonic time moves ##\tau = 0## to ##t=-\infty## in the preferred time. In the blockworld, proper time is the true time, and the GR FLRW blockworld has, as a global object, the problem that it is geodesically incomplete, with infinities appearing at finite proper time. For the dynamical view, that global spacetime does not even exist, and on particular time slices this is not a problem at all.
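
(A sketch of that last claim, using the standard flat, matter-dominated scale factor ##a \propto \tau^{2/3}##: a harmonic time coordinate ##T## must satisfy ##\partial_\tau (a^3\, \partial_\tau T) = 0##, so ##dT/d\tau \propto a^{-3} \propto \tau^{-2}## and
$$T(\tau) \propto -\frac{1}{\tau} + \text{const},$$
which pushes the proper-time singularity at ##\tau \to 0^{+}## out to ##T \to -\infty##.)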
 
  • #78
PeterDonis said:
And if you want to ask @Elias1960 to justify his viewpoint, or email Wald to ask him to justify his, that's fine. But that doesn't explain why you keep asking me to justify a claim that I have never made.
I never claimed you did say that. I said IF you believe ..., then why? Your response should have been simply, "I don't believe ... ." Then we're in agreement that the flat, matter-dominated cosmology model does not have to include ##a = 0## with ##\rho = \infty##.
 
  • #79
RUTA said:
Wald

I'm a little confused about what you think Wald's position is. Wald describes the singularity theorems and what they show in Chapter 9, yes. And you have already agreed that cutting off a solution that, when maximally extended, has a singularity, before the singularity is reached, as you did with your version of the Einstein-de Sitter model, does not contradict the singularity theorems. So what, exactly, do you disagree with Wald about?
 
  • #80
PeterDonis said:
I'm a little confused about what you think Wald's position is. Wald describes the singularity theorems and what they show in Chapter 9, yes. And you have already agreed that cutting off a solution that, when maximally extended, has a singularity, before the singularity is reached, as you did with your version of the Einstein-de Sitter model, does not contradict the singularity theorems. So what, exactly, do you disagree with Wald about?
That the existence of past inextendable timelike or null geodesics is "pathological."
 
  • #81
RUTA said:
I said IF you believe ..., then why?

You didn't come across to me as saying "IF", but fine. I don't think ##\rho = \infty## is reasonable. But I also don't think that just arbitrarily cutting off a 4-D spacetime geometry is reasonable; I think a reasonable model has to include everything that can be included up to the maximal analytic extension. If the maximal analytic extension of a particular idealized model leads to ##\rho = \infty## somewhere, to me that's a reason for adjusting the model. Inflationary cosmology adjusts the model by changing the stress-energy tensor prior to the end of inflation to one that violates the energy conditions and therefore does not require ##\rho = \infty## anywhere.
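
(For reference, the energy condition at issue is the strong energy condition: the second Friedmann equation ##\ddot{a}/a = -\tfrac{4\pi G}{3}(\rho + 3p)## only forces deceleration, and hence the focusing behind the singularity theorems, when ##\rho + 3p \ge 0##; an inflaton-dominated phase with ##p \approx -\rho## violates that condition, which is what lets such models avoid ##\rho = \infty##.)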

RUTA said:
Then we're in agreement that the flat, matter-dominated cosmology model does not have to include ##a = 0## with ##\rho = \infty##.

I would say that a model which fixes the ##\rho = \infty## problem, by adjusting the stress-energy tensor prior to some spacelike hypersurface, is no longer a simple "flat, matter-dominated cosmology model"; it includes a region that is flat and matter-dominated, but that is not the entire model. (Note that in our best current model of our universe, the flat, matter-dominated region ends a few billion years before the present; our universe at present in our best current model is dark energy dominated, not matter dominated. So even the flat, matter-dominated region itself is an extrapolation; it's not what we currently observe.)
 
  • #82
RUTA said:
That the existence of past inextendable timelike or null geodesics is "pathological."

So, in other words, you think ##\rho = \infty## is unreasonable (and I agree), but you also think it's perfectly OK for a model to predict that some timelike observer's worldline can just suddenly cease to exist in the past, because it hits an "edge" of spacetime?
 
  • #83
PeterDonis said:
But I also don't think that just arbitrarily cutting off a 4-D spacetime geometry is reasonable; I think a reasonable model has to include everything that can be included up to the maximal analytic extension.
Why do you believe that?
 
  • #84
PeterDonis said:
So, in other words, you think ##\rho = \infty## is unreasonable (and I agree), but you also think it's perfectly OK for a model to predict that some timelike observer's worldline can just suddenly cease to exist in the past, because it hits an "edge" of spacetime?
Absolutely. What is wrong with that? I suspect we're getting to your dynamical bias.
 
  • #85
RUTA said:
Why do you believe that?

Because otherwise our model would predict that spacetime just ends for no reason. Unless you can give a reason, an adynamical reason, as I have asked you to do several times now, and you haven't.

RUTA said:
What is wrong with that?

That there's no reason for it. Unless you can give a reason. But you haven't.

RUTA said:
I suspect we're getting to your dynamical bias.

I have made no dynamical claims whatever. As I have repeatedly said, I am taking your blockworld viewpoint in which spacetime is a 4-D geometry that doesn't change or evolve, it just is. Asking for a reason does not mean asking for a dynamical reason. An adynamical reason would be fine. But you have given no reason.
 
  • #86
RUTA said:
Fine, but that does not in any way refute my point. Do you understand that fact?
No, I don't. I continue to think that your point is refuted. Maybe you could formulate your point in a different way that makes it possible to understand why it is not refuted?
RUTA said:
There is no double standard here. I told you constraint-based explanation does not rule out causal explanation in my view.
But it makes it unnecessary. You claim to provide an explanation for the violation of the Bell inequality where no causal explanation is possible, as proven in a theorem, no?

So, the double standard does not disappear just because you allow causal explanations. Causal explanations are required in science. The question is whether you will be satisfied when no causal explanation is given. In fundamental physics you are (since you give no causal explanation for Bell inequality violations). But what about the tobacco industry lobby not seeing a necessity to give causal explanations for lung cancer correlations? What about the creationist not seeing a necessity to causally explain dinosaur bones? What about the astrologer who refuses to give any causal explanation of how the position of Venus at the date of your birth influences your fate?
 
  • #87
PeterDonis said:
Because otherwise our model would predict that spacetime just ends for no reason. Unless you can give a reason, an adynamical reason, as I have asked you to do several times now, and you haven't.
I've repeated many times that you only need to keep that part of M that you believe can or can conceivably produce empirically verifiable results. Every part of M fits self-consistently with every other part of M via EEs. Only a dynamical thinker believes some part of M needs to be explained independently from it fitting coherently into the whole of M. That's where you're coming from and that's why you keep believing I haven't answered your question. You're thinking dynamically.
 
  • #88
Elias1960 said:
No, I don't. I continue to think that your point is refuted. Maybe you could formulate your point in a different way that makes it possible to understand why it is not refuted?
Your objections have not in any way refuted my claim as I've stated it many times as clearly as I know how. Sorry, I can't help you further.

Elias1960 said:
But it makes it unnecessary. You claim to provide an explanation for the violation of the Bell inequality where no causal explanation is possible, as proven in a theorem, no?
Read very carefully what I claimed in the Insight. I warn the reader that if they are unwilling or unable to accept the adynamical constraints as explanatory without a corresponding dynamical counterpart, then they will not believe I have explained the violation of Bell's inequality. That is the case for you. But, if you do accept the premise, then the conclusion (Bell's inequality has been explained) follows as a matter of deductive logic.

Elias1960 said:
So, the double standard does not disappear just because you allow causal explanations. Causal explanations are required in science.
That is your belief. I'm saying, "look at what you get if you accept that the constraints we have in physics are explanatory even in the absence of causal mechanisms." You don't believe they are, fine. But, that does not refute my claim. Again, I don't know how to state my point any more clearly than that. Sorry.
 
  • #89
RUTA said:
That's where you're coming from

No, you don't understand where I'm coming from. Let me try to get at the issue I see another way.

RUTA said:
I've repeated many times that you only need to keep that part of M that you believe can or can conceivably produce empirically verifiable results.

In the particular case of the Einstein-de Sitter model, as far as I can tell, to you this means: cut off the model at some spacelike hypersurface before it reaches ##\rho = \infty##. But how close to ##\rho = \infty## can I get before I cut the model off? Your cutoff procedure left a finite range of time (from ##t = 0## to ##t = - B## in your modified model) between the edge of the model and the problematic ##\rho = \infty## point. Could I make an equally viable model by taking, say, ##B / 2## instead of ##B## as the constant in the model?

If your answer is yes, your procedure does not lead to a unique model; taken to its logical conclusion, it ends up being the same as the standard Einstein-de Sitter model, since that model does not consider the ##\rho = \infty## point to be part of the manifold in any case, it's just a limit point that is approached but never reached.

If your answer is no, then you need to give a reason for picking the particular value ##B## as the constant in your model, instead of something else. So far I have not seen you give one.

By contrast, my response to the fact that the Einstein-de Sitter model predicts ##\rho = \infty## at some particular limit point is to look for an alternate model that does not have that property, by taking an Einstein-de Sitter region, just like the one in your model, and joining it to another region, such as an inflationary region, that does not predict ##\rho = \infty## anywhere. You appear to think that any such extension is driven by a "dynamical" viewpoint, but I don't see why that must be the case. I think the desire to have a model that has no arbitrary "edges" where spacetime just stops for no reason, is a valid adynamical desire. You appear to disagree, but I can see no reason why you should, and you have not given any reason for why you do.

RUTA said:
Only a dynamical thinker believes some part of M needs to be explained independently from it fitting coherently into the whole of M.

This has nothing to do with my issue. I am not asking you to explain the Einstein-de Sitter region in your model independently from fitting it into a larger model. I am asking why you have no larger model: why you just have the Einstein-de Sitter region and nothing else, when that region is not fitted coherently into any larger model, it's just sitting there with an obvious edge that, as far as I can see, has no reason for being there. If you think that region all by itself, with its edge, is a coherent whole adynamical model, I would like you to explain why. Just saying "oh, you're thinking dynamically so you just don't understand" doesn't cut it.
 
  • #90
RUTA said:
Every part of M fits self-consistently with every other part of M via EEs.

In your version of the Einstein-de Sitter model, there is only one part of M, the Einstein-de Sitter region with your arbitrary cutoff. So in your model, there is nothing to fit self-consistently with. But there certainly could be: you could, for example, fit your Einstein-de Sitter region self-consistently via EEs with an inflationary region, just as inflationary models do. Why didn't you?
 
  • #91
I've answered your question many times: the cutoff is not at all arbitrary, you keep whatever you can argue generates conceivable empirical results. I didn't say you can't keep inflationary models. I personally think they're not interesting, for the reasons articulated by Paul Steinhardt, but others may want to pursue them.
 
  • #92
Again, the only difference between what I'm claiming and what exists in the common textbook explanations of GR cosmology is that the textbooks say GR cosmology models are problematic because they're "singular" in the sense that they entail ##\rho = \infty##. And I'm saying adynamical explanation does not force those solutions to entail that region. Adynamical explanation allows you to simply omit the problematic region if it is beyond empirical confirmation. So far that's true, the model is working great despite the fact that a purely mathematical extrapolation produces ##\rho = \infty##. So why worry about that region?
 
  • #93
RUTA said:
the cutoff is not at all arbitrary, you keep whatever you can argue generates conceivable empirical results

Then the Einstein-de Sitter model would be valid for any ##t > 0##, since it predicts finite and positive density and scale factor. So are you saying I could use any value of ##B## I like in your modified version of the model, putting the cutoff wherever I want, as long as it doesn't include the ##\rho = \infty## point?

RUTA said:
I didn't say you can't keep inflationary models.

Ok, that helps to clarify your viewpoint.

RUTA said:
Adynamical explanation allows you to simply omit the problematic region if it is beyond empirical confirmation

But ##\rho = \infty## isn't a "region", it's a point. And that point is not even included in the manifold; as I said, it's a limit point that's approached but never reached. So again, I don't see what is wrong with the standard Einstein-de Sitter model, where ##B = 0## in your modified formula, if any finite value of ##\rho## is ok.
 
  • #94
RUTA said:
the textbooks say GR cosmology models are problematic because they're "singular" in the sense that they entail ##\rho = \infty##.

No, that's not what they say. What they say (Wald, for example) is that spacetime curvatures which are finite but larger than the Planck scale are problematic for a classical theory of gravity like GR, because we expect quantum gravity effects to become important at that scale. In the Einstein-de Sitter model, for example, the curvature becomes infinite at the point you have been labeling ##\rho = \infty##, not just the density. And in Schwarzschild spacetime, the curvature is the only thing that becomes infinite at the singularity at ##r = 0##, because it's a vacuum solution and the stress-energy tensor is zero everywhere. But that singularity is just as problematic on a viewpoint like Wald's.
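
(As a concrete check, with ##c = 1## and signature ##(-,+,+,+)##: for dust the trace of the EFE gives ##R = 8\pi G \rho##, so in the Einstein-de Sitter model the Ricci scalar blows up exactly where ##\rho## does, while for Schwarzschild ##R_{\mu\nu} = 0## everywhere but the Kretschmann scalar
$$R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta} = \frac{48\, G^2 M^2}{r^6}$$
still diverges as ##r \to 0##.)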
 
  • #95
PeterDonis said:
No, that's not what they say. What they say (Wald, for example) is that spacetime curvatures which are finite but larger than the Planck scale are problematic for a classical theory of gravity like GR, because we expect quantum gravity effects to become important at that scale. In the Einstein-de Sitter model, for example, the curvature becomes infinite at the point you have been labeling ##\rho = \infty##, not just the density. And in Schwarzschild spacetime, the curvature is the only thing that becomes infinite at the singularity at ##r = 0##, because it's a vacuum solution and the stress-energy tensor is zero everywhere. But that singularity is just as problematic on a viewpoint like Wald's.
Yes, the curvature is also problematic as Wald points out in Chapter 9. The ##\rho = \infty## is also a problem for Schwarzschild at ##r = 0## because that's where M is. Am I missing something there?
 
  • #96
PeterDonis said:
But ##\rho = \infty## isn't a "region", it's a point. And that point is not even included in the manifold; as I said, it's a limit point that's approached but never reached. So again, I don't see what is wrong with the standard Einstein-de Sitter model, where ##B = 0## in your modified formula, if any finite value of ##\rho## is ok.
Well, it's difficult to say how "big" ##a = 0## is, because the spatial hypersurfaces up to that point are infinite in extent. Its "size" is undefined, so I was being careful with my language.
 
  • #97
PeterDonis said:
Then the Einstein-de Sitter model would be valid for any ##t > 0##, since it predicts finite and positive density and scale factor. So are you saying I could use any value of ##B## I like in your modified version of the model, putting the cutoff wherever I want, as long as it doesn't include the ##\rho = \infty## point?
Yes, and you could even use ##\rho = \infty## if you could produce empirical verification. Use whatever you need, just don't dismiss the model because you believe an empirically unverifiable mathematical extrapolation leads to "pathologies."
 
  • #98
RUTA said:
The ##\rho = \infty## is also a problem for Schwarzschild at ##r = 0## because that's where M is.

There is no "where M is" in the Schwarzschild solution; it's a vacuum solution with zero stress-energy everywhere and no ##\rho = \infty## (and for that matter no ##\rho \neq 0##) anywhere. Also ##r=0## is not even part of the manifold; it's a limit point that is approached but never reached, so it can't be "where" anything is.

M in the Schwarzschild solution is a global property of the spacetime; there is no place "where it is".
 
  • #99
RUTA said:
don't dismiss the model because you believe an empirically unverifiable mathematical extrapolation leads to "pathologies."

Are you claiming that Wald is "dismissing" the Einstein-de Sitter model or similar models on these grounds? I don't see him dismissing it at all. I just see him (and MTW, and every other GR textbook I've read that discusses this issue) saying that any such model will have a limited domain of validity; we should expect it to break down in regions where spacetime curvature at or greater than the Planck scale is predicted.
 
  • #100
RUTA said:
it's difficult to say how "big" ##a = 0## is

But however "big" it is, it does not extend to any value of ##t## greater than zero in the standard FRW models. That's the point I was making.
 