Hurkyl said:
A trivial theory is still a theory -- aesthetic grounds are not sufficient justification for rejecting it.
Not aesthetic grounds-- the grounds would be the definition of what a theory is.
Hurkyl said:
And besides, the 'database theory' is the only theory (up to equivalence) that makes no assertions beyond the experimental data.
There have to be some defining assumptions that theories make, such as objectivity and repeatability. These can never be proven, only falsified. That is the important kind of prediction theories must make: the "weather prediction" kind-- the useful, bridge-building predictions that science makes. To make predictions of that nature, there is no need to pretend theories are things that they are not.
Nevertheless, that kind of prediction is often (unfortunately) viewed as a trivial aspect of a theory-- people sometimes erroneously treat theories as if their value were their ability to predict outside the box of the core assumptions that define what a theory is. Those latter kinds of "predictions" are really just guesses: ways to extend a theory that, once tested, become a means to create new theories, i.e., they become predictions of the important kind. One doesn't need a theory to form a hypothesis, though a theory can be a helpful guide if we need one. Unfortunately, the latter gets all the attention, despite being extraneous to the value of science, and results in all kinds of misconceptions about what science is and what you can use it for (not to mention a list of "revolutions" in scientific thinking, rather than just big discoveries, which is all they really are).
Hurkyl said:
But they can be singled out: an observer is inertial if and only if his worldline is straight,
That is circular reasoning: you simply define "straight" that way. All we can say is that their accelerometers read zero; if we want to think of that as special, that's up to us-- there's no need to go and build physics around it.
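To make the circularity explicit, here is the single condition both phrases name, in standard notation (my sketch, not anything Hurkyl wrote), for an affine chart on flat spacetime, with $\tau$ the observer's proper time and $x^\mu(\tau)$ the worldline:

$$a^\mu \;=\; \frac{d^2 x^\mu}{d\tau^2} \;=\; 0.$$

A "straight" worldline and a zero accelerometer reading are both this one equation, so neither can serve as an independent justification for the other.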
Hurkyl said:
and it's an easy theorem that null vectors have 'speed' one in any orthonormal affine coordinate chart.
At last we see the appearance of the word "orthonormal", which I've been hammering for a while now.
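For concreteness, here is the one-line version of that theorem (my rendering, in the $(-,+,+,+)$ convention with $c=1$): for a null vector $v$ in an orthonormal chart,

$$\eta_{\mu\nu}\, v^\mu v^\nu = 0 \;\Longrightarrow\; (v^0)^2 = |\vec v\,|^2 \;\Longrightarrow\; \frac{|\vec v\,|}{v^0} = 1,$$

and it is precisely the orthonormality of the chart that licenses reading $|\vec v\,|/v^0$ as a "speed".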
Hurkyl said:
The theory of special relativity, like any other theory, is formulation independent: you get the same theory no matter how you formulate it. E.g., if you formulate it in terms of inertial observers and Poincaré-invariant coordinate metrics, you get exactly the same theory as if you formulate it in terms of a coordinate-independent metric with a specified signature.
I remain unconvinced of that, and this is an important purpose of the thread. The key thing I have maintained is not that SR makes false predictions for quantitative measurements within the regime where it has been tested, nor that it is unable to predict the dynamics of any particle with a known proper acceleration that satisfies certain other assumptions (as are necessary in either classical physics or Dirac's formulation of quantum mechanics). Rather, its problems are pedagogical, in that it may make unnecessary guesses that could prove to be false in future experiments outside the realm where it has been tested. Such false "predictions" are not an important part of any theory, just as it was not an important part of Newton's laws that they work at arbitrary speeds (and the fact that they don't has in no way compromised their use in situations where they are warranted).
The pedagogical problems of special relativity include the fact that its postulates cannot be applied from the reference frame of an accelerated observer. Also, they imply choices about how we picture reality that are not supported, merely assumed. As such, it generates explanations for "why things happen the way they do" that are inconsistent between observers.

A classic example: what is the cause of the blueshift between two rockets in free space? If we take Einstein's convention that "stationary" means the frame of any inertial observer describing their universe, then the cause of the blueshift observed by an inertial observer is always the squeezing of the wavelength due to the motion of the source, coupled with time dilation of the source. However, a more flexible interpretation of the "cause" of that phenomenon is that the wave period simply depends on the proper time elapsed along the receiver's path between the absorption of the prior wavecrest and the absorption of the following one (calculus could make that even more precise). That accounts for everything; we do not need either of the two "postulates of special relativity" to perform that calculation, we need only the signature of the metric and the conventions by which the observer measures time (i.e., they will ultimately take the ratio of the period of a wave to the period of a clock).
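To illustrate that the calculation really only needs the metric signature and the receiver's proper time, here is a minimal numeric sketch (my own construction, not anything from the thread) in 1+1 Minkowski spacetime with $c=1$: an inertial source at rest emits crests with period T toward a receiver approaching at speed beta, and the received period is computed as the proper time along the receiver's worldline between two absorption events, then checked against the textbook Doppler factor.

```python
import math

# Units with c = 1. Source at rest at x = 0 emits wavecrests toward +x
# with period T; the receiver starts at x0 > 0 and approaches the source:
# x_r(t) = x0 - beta * t.  (All values here are arbitrary choices.)
T, x0, beta = 1.0, 10.0, 0.6

def reception_time(t_emit):
    # A crest emitted at (t_emit, 0) travels along x = t - t_emit and
    # meets the receiver when t - t_emit = x0 - beta * t.
    return (x0 + t_emit) / (1.0 + beta)

t1, t2 = reception_time(0.0), reception_time(T)

# Received period = the receiver's proper time between the two absorption
# events, i.e. the Minkowski interval along its straight worldline.
dt = t2 - t1
dx = -beta * dt
received_period = math.sqrt(dt**2 - dx**2)

# Textbook relativistic Doppler factor for an approaching receiver.
doppler_period = T * math.sqrt((1.0 - beta) / (1.0 + beta))

print(received_period, doppler_period)  # both 0.5 for beta = 0.6
```

Nothing in the computation invokes either postulate; only the interval $dt^2 - dx^2$ (the signature) and the identification of that interval with the receiver's clock enter.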
The rest is pure language and arbitrary pictures/coordinates, and does not belong in the postulates of a theory. Once again, you will see the problem with the latter when some observation contradicts those postulates and we ask, "but why did we expect the postulates to hold, based on the database we already had?" The answer will be, "there was no reason; we were deluding ourselves."
Hurkyl said:
Tomorrow is a new regime too.
Yes, but all that goes right into the definition of a theory, as I alluded to above. We do not need to add special postulates to handle that; it is in all scientific theories from the start. This is my point: the importance of understanding which aspects of our theory are there because that's how we define scientific theories, which aspects are there because they unify existing observations, which parts are extensions that we are curious about testing and have no idea whether they will work (like Newton and arbitrary speed), and which parts are just pure fantasy (like MWI) that we have no reason whatsoever to expect to ever pass a falsifiable test.
Hurkyl said:
The confidence afforded to us by the scientific method.
But I still don't know which of the two versions of "confidence" you mean. I would say the confidence afforded to us by the scientific method is of the first kind I listed, but you seem to be talking about the second situation.
Hurkyl said:
The point is, before we had evidence contradicting the former, it was scientifically correct to favor the "globally Minkowski" hypothesis over the "locally Minkowski" hypothesis. Why was that scientifically correct?
It wasn't, any more than it was "scientifically correct" to think Newton's laws would extend to arbitrary speed, or that Ptolemy's model would hold up to more precise observations. The only thing that is scientifically correct is to expect predictions "within the box" of the current dataset to work; that's like predicting the weather or building a bridge. Other types of predictions are called "guesses", and it is not scientifically correct to expect them to work (a point history has been rather clear on, especially once you bear in mind that "the winners write the history").
Hurkyl said:
Because the "globally Minkowski" hypothesis had stronger empirical support.
No, it had no empirical support (even in the absence of gravity), as it was only formulated and tested for inertial observers. Indeed, it breaks down when you leave that observational regime, as is not untypical of physical theories.
Hurkyl said:
Of course, with the evidence we now have, "locally Minkowski" has stronger empirical support.
If by that you mean that "global Minkowski is known to be wrong", I agree.
Hurkyl said:
Huh? That has absolutely nothing to do with what I said in that quote.
I thought you were pointing out that -+++ is the same as +---. What is written is merely a re-affirmation of what I've been saying all along-- that the Minkowski metric is invariant only under the transformations of the Poincaré group (and is not invariant under arbitrary coordinate transformations or changes of observer, though its signature is).
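In symbols (my notation, standard tensor-transformation conventions): under a change of coordinates $x \to x'$, the metric components become

$$g'_{\mu\nu} \;=\; \frac{\partial x^\alpha}{\partial x'^\mu}\,\frac{\partial x^\beta}{\partial x'^\nu}\,\eta_{\alpha\beta},$$

which reproduces $\eta_{\alpha\beta}$ exactly when the transformation is a Poincaré transformation, while Sylvester's law of inertia guarantees the signature survives any invertible coordinate change.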
Hurkyl said:
That's what calculus is for.
If you want to use calculus to integrate the metric between events from the perspective of a constantly accelerated observer, you need to integrate the Rindler metric, not the Minkowski metric. The latter gives you the wrong answer; that's the point.
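A small numeric illustration of that claim (my construction, with $c=1$; the particular numbers are arbitrary): for an observer sitting at fixed position $\bar x$ in a Rindler chart with line element $ds^2 = -(a\bar x)^2\,d\bar t^{\,2} + d\bar x^2$, the Rindler metric gives the elapsed proper time directly, whereas naively plugging the same coordinates into the Minkowski line element gives a different answer. A direct integration along the corresponding hyperbolic worldline in inertial coordinates shows which one is right.

```python
import math

a = 1.0      # acceleration scale of the Rindler chart (c = 1)
xbar = 2.0   # fixed Rindler position of the observer
dtbar = 3.0  # elapsed Rindler coordinate time

# Rindler line element ds^2 = -(a*xbar)^2 dtbar^2 + dxbar^2, with
# dxbar = 0 along this worldline, gives the proper time directly:
tau_rindler = a * xbar * dtbar

# Naively treating (tbar, xbar) as inertial Minkowski coordinates gives:
tau_naive = dtbar

# Cross-check: embed the worldline in inertial coordinates,
# T = xbar*sinh(a*tbar), X = xbar*cosh(a*tbar), and sum the
# Minkowski intervals sqrt(dT^2 - dX^2) along it.
n = 10000
tau_check = 0.0
for i in range(n):
    t0, t1 = dtbar * i / n, dtbar * (i + 1) / n
    dT = xbar * (math.sinh(a * t1) - math.sinh(a * t0))
    dX = xbar * (math.cosh(a * t1) - math.cosh(a * t0))
    tau_check += math.sqrt(dT * dT - dX * dX)

print(tau_rindler, tau_naive, tau_check)  # 6.0, 3.0, ~6.0
```

Note that at $\bar x = 1/a$ the two answers happen to coincide, which is one way an unwarranted extrapolation can hide inside a limited observational regime.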