Global simultaneity surfaces - how to adjust proper time?

In summary: Yes, this is correct. But that is always true for any timelike curve provided your curve parameter is properly normalized. So from this fact alone you can't deduce anything useful about particular congruences of timelike curves that have particular properties of interest,...
  • #36
PeterDonis said:
I think the issue is that, while ##\omega \wedge d \omega = 0## is equivalent to ##\omega = - h d t## at any given point, if we look at the entire spacetime, the ##h## and ##t## such that ##\omega = - h dt## might be different at different points. The synchronizable condition rules that out: ##h## and ##t## must be the same globally.
Functions ##h## and ##t## are actually defined in an open neighborhood ##U## of any given point of the spacetime manifold. In the intersection of two overlapping neighborhoods those functions must be the same (i.e. they must coincide there). If we extend this result to all spacetime points, then the two functions should be the same globally, I believe.
 
Last edited:
  • #37
cianfa72 said:
In the intersection of two overlapping neighborhoods those functions must be the same
I'm not sure that's necessarily true. I think I've seen a discussion of this somewhere but I can't find a reference just now.

Does the textbook you referenced discuss this at all?
 
  • #38
PeterDonis said:
Does the textbook you referenced discuss this at all?
No, not as far as I can tell. However, for the 'synchronizable' case Sachs and Wu explicitly say ##h>0##. I don't know if that really makes a difference.
 
  • #39
cianfa72 said:
AFAIK ##\omega \wedge d\omega = 0## is equivalent (iff) to ##\omega = -hdt##.
Well, let's see.

If ##\omega = f dt## (I use ##f## instead of ##- h## to avoid the pesky minus sign while writing the following; of course we can always just set ##f = -h## at the end to match what your reference gives), then ##d \omega = df \wedge dt##, so ##\omega \wedge d \omega = 0## by inspection.

If ##\omega \wedge d \omega = 0##, then there must be some ##\alpha## such that ##d \omega = \alpha \wedge \omega##. Then we must have ##d d \omega = 0 = d \alpha \wedge \omega - \alpha \wedge d \omega##. Since ##\alpha \wedge d \omega = \alpha \wedge \alpha \wedge \omega = 0##, the second term vanishes identically; to make the first term vanish, we must have ##d \alpha = 0##.

Now we can see the issue that (I think) explains why locally synchronizable is not necessarily equivalent to synchronizable. ##d \alpha = 0## means that ##\alpha## is closed; but ##\omega = f dt## requires ##\alpha## to be exact, i.e., ##\alpha = df## [Edit: should be ##\alpha = d \ln f##, see post #51]. So ##\omega = f dt## is only equivalent to ##\omega \wedge d \omega = 0## under conditions where a form being closed is equivalent to it being exact. Since that is not always the case, we cannot always say ##\omega = f dt## is equivalent to ##\omega \wedge d \omega = 0##.
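To make the first direction concrete, here is a minimal sympy sketch (the chart ##(t, x, y, z)## and the function ##f## are just illustrative choices, not anything from the references), checking component-wise that ##\omega = f dt## implies ##\omega \wedge d\omega = 0##:

```python
# A minimal sanity check with sympy: if omega = f dt, then omega ^ d(omega) = 0.
# The chart (t, x, y, z) and the function f are hypothetical, purely for illustration.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
f = sp.Function('f')(t, x, y, z)

omega = [f, 0, 0, 0]                      # components of omega = f dt

# (d omega)_{mn} = d_m omega_n - d_n omega_m
domega = [[sp.diff(omega[n], coords[m]) - sp.diff(omega[m], coords[n])
           for n in range(4)] for m in range(4)]

# (omega ^ d omega)_{mnp} ~ omega_m (d omega)_{np} + cyclic permutations
def wedge_1_2(w, dw, m, n, p):
    return w[m]*dw[n][p] + w[n]*dw[p][m] + w[p]*dw[m][n]

print(all(sp.simplify(wedge_1_2(omega, domega, m, n, p)) == 0
          for m in range(4) for n in range(4) for p in range(4)))   # True
```

The components are antisymmetrized by hand rather than via sympy.diffgeom, just to keep the check self-contained.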
 
Last edited:
  • #40
PeterDonis said:
If ##\omega \wedge d \omega = 0##, then there must be some ##\alpha## such that ##d \omega = \alpha \wedge \omega##.
I tried to show ##\omega \wedge d \omega = 0 \Leftrightarrow d \omega = \alpha \wedge \omega## for some one-form ##\alpha##. The implication ##\Leftarrow## is true by inspection.

For the other direction ##\Rightarrow## I tried as follows: just to fix ideas, consider a 4D manifold and take a one-form ##\omega## defined on it. Since at each point of the 4D manifold the vector space of one-forms has dimension 4, we can get a basis by completing ##\omega## with 3 independent one-forms ##\delta,\beta,\gamma##.

##d\omega## is a 2-form, hence, like any generic 2-form, it can always be written as
##d\omega = a_1\delta\wedge \omega + a_2\beta \wedge \omega + a_3\gamma \wedge \omega + a_4\beta \wedge \delta+ a_5\gamma \wedge \delta+ a_6\gamma \wedge \beta##

##\begin{align} d\omega \wedge \omega & = (a_1\delta\wedge \omega + a_2\beta \wedge \omega + a_3\gamma \wedge \omega + a_4\beta \wedge \delta+ a_5\gamma \wedge \delta+ a_6\gamma \wedge \beta) \wedge \omega \nonumber \\ & = a_4\beta \wedge \delta\wedge \omega + a_5\gamma \wedge \delta\wedge \omega + a_6\gamma \wedge \beta \wedge \omega = 0 \nonumber \end{align} ##

Since ##\beta \wedge \delta\wedge \omega, \gamma \wedge \delta\wedge \omega, \gamma \wedge \beta \wedge \omega## are independent we get ##a_4=a_5=a_6=0##.

That means ##d\omega = a_1\delta\wedge \omega + a_2\beta \wedge \omega + a_3\gamma \wedge \omega = (a_1\delta+ a_2\beta + a_3\gamma) \wedge \omega = \alpha \wedge \omega##, where ##\alpha## is the one-form ##\alpha = a_1\delta+ a_2\beta + a_3\gamma##.

Is that correct?
 
  • #41
cianfa72 said:
I tried to show ##\omega \wedge d \omega = 0 \Leftrightarrow d \omega = \alpha \wedge \omega## for some one-form ##\alpha##.
I believe this is a special case of a general property of the wedge product, that if ##\omega \wedge \beta = 0##, ##\beta## must be of the form ##\alpha \wedge \omega## for some ##\alpha##. (Note that in the general case, it is possible that ##\alpha## is just a function, or 0-form, in which case ##\alpha \wedge \omega## is just an ordinary product of a function with a form. But this is ruled out in the special case under discussion since ##d \omega## is one rank higher than ##\omega##.)
 
  • #42
PeterDonis said:
I believe this is a special case of a general property of the wedge product, that if ##\omega \wedge \beta = 0##, ##\beta## must be of the form ##\alpha \wedge \omega## for some ##\alpha##.
Yes, and I believe I have given a proof of it (at least for the wedge product of a 1-form and a 2-form).

PeterDonis said:
If ##\omega \wedge d \omega = 0##, then there must be some ##\alpha## such that ##d \omega = \alpha \wedge \omega##. Then we must have ##d d \omega = 0 = d \alpha \wedge \omega - \alpha \wedge d \omega##. Since ##\alpha \wedge d \omega = \alpha \wedge \alpha \wedge \omega = 0##, the second term vanishes identically; to make the first term vanish, we must have ##d \alpha = 0##.
But... to make the first term vanish it would again suffice to have ##d\alpha = \beta \wedge \omega## for some one-form ##\beta##, wouldn't it?
 
  • #43
cianfa72 said:
to make the first term vanish it would again suffice to have ##d\alpha = \beta \wedge \omega## for some one-form ##\beta##, wouldn't it?
##d \alpha = 0## will certainly make the first term vanish.

Formally, ##d \alpha = \beta \wedge \omega## is another possibility to make the first term vanish, yes. However, I believe it is ruled out because continuing to apply ##dd = 0## leads to an infinite regress. I suspect there is a more straightforward way of ruling out that possibility but I have not been able to find one.
 
  • #44
cianfa72 said:
I believe I have given a proof of it
Your "proof" looks to me like a circular argument, since this step...

cianfa72 said:
Since ##\beta \wedge \delta\wedge \omega, \gamma \wedge \delta\wedge \omega, \gamma \wedge \beta \wedge \omega## are independent we get ##a_4=a_5=a_6=0##.
...appears to amount to assuming what is to be proved, namely that all parts of ##d \omega## that are linearly independent of ##\omega## must vanish.
 
  • #45
PeterDonis said:
...appears to amount to assuming what is to be proved, namely that all parts of ##d \omega## that are linearly independent of ##\omega## must vanish.
Why? The one-forms ##\omega, \delta, \beta, \gamma## are linearly independent by construction (starting from ##\omega##, we build a basis by completing it with 3 independent one-forms).

By construction ##\beta \wedge \delta \wedge \omega, \gamma\wedge \delta \wedge \omega, \gamma \wedge \beta \wedge \omega## are 3 independent 3-forms (the dimension of the vector space of 3-forms in 4 dimensions is ##\binom 4 3 = 4##). The coefficients of any linear combination of them that gives the null 3-form must therefore vanish.
 
  • #46
cianfa72 said:
Why?
I'm not saying the forms you say are linearly independent aren't. I'm just saying that saying ##d \omega = \alpha \wedge \omega## is saying the same thing as "there are no non-vanishing components of ##d \omega## that are linearly independent of ##\omega##". Saying the latter is not "proving" the former; it's just restating it.
 
  • #47
PeterDonis said:
I'm not saying the forms you say are linearly independent aren't. I'm just saying that saying ##d \omega = \alpha \wedge \omega## is saying the same thing as "there are no non-vanishing components of ##d \omega## that are linearly independent of ##\omega##". Saying the latter is not "proving" the former; it's just restating it.
Sorry, I'm not sure I get your point. To me it seems I have proved that if, by hypothesis, ##d\omega \wedge \omega=0##, then necessarily ##d\omega = \alpha \wedge \omega## for some one-form ##\alpha##.

Note that the first 3 terms, those involving ##a_1, a_2, a_3##, actually vanish since in each wedge product there are two occurrences of ##\omega##.
 
Last edited:
  • #48
cianfa72 said:
To me it seems I have proved that if, by hypothesis, ##d\omega \wedge \omega=0##, then necessarily ##d\omega = \alpha \wedge \omega## for some one-form ##\alpha##.
If it helps you to understand what's going on, that's fine. But saying that ##d \omega \wedge \omega = 0## is already saying that ##d \omega## and ##\omega## are not linearly independent; that's what a wedge product vanishing means. Saying that ##d \omega = \alpha \wedge \omega## for some ##\alpha## is just another way of saying that ##d \omega## is not linearly independent of ##\omega##. No further proof is required.
 
  • #49
PeterDonis said:
Saying that ##d \omega = \alpha \wedge \omega## for some ##\alpha## is just another way of saying that ##d \omega## is not linearly independent of ##\omega##. No further proof is required.
Ah ok; nevertheless my proof should be correct, I believe.
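As a cross-check of the decomposition argument in post #40, here is a sympy sketch in the simplest case ##\omega = dt##, ##\delta = dx##, ##\beta = dy##, ##\gamma = dz## (a hypothetical choice of basis; the coefficients ##a_1,\dots,a_6## are the placeholders used there). Wedging the generic 2-form with ##\omega## leaves only the ##a_4, a_5, a_6## pieces, so ##d\omega \wedge \omega = 0## forces them to vanish:

```python
# Sketch (sympy) of the basis argument in post #40, in the special case where
# omega, delta, beta, gamma are the coordinate one-forms dt, dx, dy, dz.
import sympy as sp

a1, a2, a3, a4, a5, a6 = sp.symbols('a1:7')

# components of the basis one-forms in the chart (t, x, y, z)
omega, delta, beta, gamma = sp.eye(4).tolist()

def wedge11(u, v):
    """Components (u ^ v)_{mn} of the wedge product of two one-forms."""
    return [[u[m]*v[n] - u[n]*v[m] for n in range(4)] for m in range(4)]

def comb(*terms):
    """Linear combination of component matrices."""
    return [[sum(c*M[m][n] for c, M in terms) for n in range(4)] for m in range(4)]

# generic 2-form d(omega) expanded on the wedge basis used in the post
dw = comb((a1, wedge11(delta, omega)), (a2, wedge11(beta, omega)),
          (a3, wedge11(gamma, omega)), (a4, wedge11(beta, delta)),
          (a5, wedge11(gamma, delta)), (a6, wedge11(gamma, beta)))

def wedge21(B, w, m, n, p):
    """Component (B ^ w)_{mnp} of a 2-form wedged with a one-form."""
    return B[m][n]*w[p] + B[n][p]*w[m] + B[p][m]*w[n]

# independent components of d(omega) ^ omega: only a4, a5, a6 survive (up to sign)
print({(m, n, p): sp.expand(wedge21(dw, omega, m, n, p))
       for m in range(4) for n in range(m + 1, 4) for p in range(n + 1, 4)})
# {(0, 1, 2): -a4, (0, 1, 3): -a5, (0, 2, 3): -a6, (1, 2, 3): 0}
```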
 
Last edited:
  • #50
PeterDonis said:
##d \alpha = 0## means that ##\alpha## is closed; but ##\omega = f dt## requires ##\alpha## to be exact, i.e., ##\alpha = df##. So ##\omega = f dt## is only equivalent to ##\omega \wedge d \omega = 0## under conditions where a form being closed is equivalent to it being exact. Since that is not always the case, we cannot always say ##\omega = f dt## is equivalent to ##\omega \wedge d \omega = 0##.
Coming back to your post #39: you mean that since from ##\omega = f dt## we get ##d\omega = df \wedge dt##, the latter is of the form ##d\omega = \alpha \wedge \omega/f## with ##\alpha = df## exact, but it is not of the form ##d\omega = \alpha \wedge \omega##?
 
  • #51
cianfa72 said:
Coming back to your post #39: you mean that since from ##\omega = f dt## we get ##d\omega = df \wedge dt##, the latter is of the form ##d\omega = \alpha \wedge \omega/f## with ##\alpha = df## exact, but it is not of the form ##d\omega = \alpha \wedge \omega##?
No. If ##\omega = f dt##, then ##d \omega = df \wedge dt##, which is in the form ##d \omega = \alpha \wedge \omega## with ##\alpha = df / f = d g## with ##g = \ln f##, so ##\alpha## is exact.
 
  • #52
PeterDonis said:
No. If ##\omega = f dt##, then ##d \omega = df \wedge dt##, which is in the form ##d \omega = \alpha \wedge \omega## with ##\alpha = df / f = d g## with ##g = \ln f##, so ##\alpha## is exact.
Ah ok: ##d\omega = df \wedge dt = (f/f)\, df \wedge dt = (df/f) \wedge (f\, dt) = \alpha \wedge \omega##.
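A component-wise sympy check of that identity (again with a hypothetical chart ##(t,x,y,z)## and an arbitrary function ##f##), confirming that ##d\omega## agrees with ##d(\ln f) \wedge \omega##, so ##\alpha = d\ln f## is exact:

```python
# Sketch check (sympy): for omega = f dt, d(omega) = d(ln f) ^ omega componentwise.
# The chart (t, x, y, z) and the function f are illustrative assumptions.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
f = sp.Function('f')(t, x, y, z)

omega = [f, 0, 0, 0]                              # omega = f dt
alpha = [sp.diff(sp.log(f), c) for c in coords]   # alpha = d(ln f) = df / f

print(all(sp.simplify(
          (sp.diff(omega[n], coords[m]) - sp.diff(omega[m], coords[n]))  # (d omega)_{mn}
          - (alpha[m]*omega[n] - alpha[n]*omega[m])                      # (alpha ^ omega)_{mn}
      ) == 0 for m in range(4) for n in range(4)))                       # True
```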
 
  • #53
##\omega \wedge d\omega = 0## seems kind of trivially true, no? If ##\omega## is a one-form, then it is of the form ##\omega = h dx##, where ##h, dx## are just placeholders. By the product rule, I get that ##d\omega = dh \wedge dx + h\, d^2x##; since ##d^2 = 0##, the second term vanishes. Thus we're left with ##\omega \wedge d \omega = h dx \wedge dh \wedge dx \rightarrow -h dx \wedge dx \wedge dh = 0##.

Another interesting thing is that ##\omega \wedge d\omega = d\omega \wedge \omega## because, in general, ##\alpha \wedge \beta = (-1)^{pq} \beta \wedge \alpha## for a p-form ##\alpha## and a q-form ##\beta##. Now, this is where ##\omega \wedge \omega = 0## comes from, if ##\omega## is a one-form. You'd get ##\omega \wedge \omega = -\omega \wedge \omega##, which is only true if it's zero.

The conclusion I draw: the only time ##\omega \wedge d\omega \neq 0## for a one-form ##\omega## is if you have a one-form with ##\omega \wedge \omega \neq 0##, but I'd have to see an example of that (if it exists).
 
  • #54
romsofia said:
##\omega \wedge d\omega = 0## seems kind of trivially true, no?
No.

romsofia said:
If ##\omega## is a one-form, then it is of the form ##\omega = h dx##
No. Not all 1-forms can be expressed in this way.

romsofia said:
Another interesting thing is that ##\omega \wedge d\omega = d\omega \wedge \omega## because, in general, ##\alpha \wedge \beta = (-1)^{pq} \beta \wedge \alpha## for a p-form ##\alpha## and a q-form ##\beta##
No. You left out a minus sign. Since the rank of ##d \omega## is one greater than the rank of ##\omega##, we have that one of ##p, q## is even and one is odd, so their product is odd and ##(-1)^{pq} = -1##, not ##+1##.

romsofia said:
The conclusion I draw: the only time ##\omega \wedge d\omega \neq 0## for a one-form ##\omega## is if you have a one-form with ##\omega \wedge \omega \neq 0##
It is impossible to have ##\omega \wedge \omega \neq 0## for a 1-form (or any form of odd rank); the wedge product of such a form with itself vanishes by antisymmetry. However, this does not mean ##\omega \wedge d \omega \neq 0## is impossible, because all the other claims you have made, on which your conclusion here is based, are also wrong.
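For a concrete illustration that ##\omega \wedge d\omega \neq 0## is perfectly possible for a 1-form, take the standard contact form ##\omega = dz + x\,dy## on ##\mathbb{R}^3## (my own example, not one from the thread's references): ##d\omega = dx \wedge dy## and ##\omega \wedge d\omega = dx \wedge dy \wedge dz \neq 0##. A sympy component check:

```python
# Counterexample sketch (sympy): the contact form omega = dz + x dy on R^3
# has omega ^ d(omega) = dx ^ dy ^ dz != 0 (an illustrative choice, not from the text).
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]

omega = [0, x, 1]                                  # omega = x dy + dz

domega = [[sp.diff(omega[n], coords[m]) - sp.diff(omega[m], coords[n])
           for n in range(3)] for m in range(3)]   # d(omega) = dx ^ dy

# the single independent component (omega ^ d omega)_{xyz}
print(omega[0]*domega[1][2] + omega[1]*domega[2][0] + omega[2]*domega[0][1])   # 1
```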
 
Last edited:
  • #55
PeterDonis said:
No. You left out a minus sign. Since the rank of ##d \omega## is one greater than the rank of ##\omega##, we have that one of ##p, q## is even and one is odd, so their product is odd and ##(-1)^{pq} = -1##, not ##+1##.
Even times odd is always even, so it would be what I said it was. ##\omega## is a one-form, ##d\omega## is a two-form, thus ##\omega \wedge d\omega = (-1)^{1\cdot 2} d\omega \wedge \omega = d\omega \wedge \omega##, but that's beside the point; I didn't think this problem through.

I was sloppy with my one-form, as I thought it was trivial, but you're right, it isn't trivial. I looked into my notes, and you can find a discussion of this in full detail in "Lectures on differential geometry" by Shlomo Sternberg; my notes mix calling ##\omega \wedge d\omega = 0## an integrability condition with a dive into Darboux's theorem, starting at page 130 in that book (or, I guess, just go to wiki: https://en.wikipedia.org/wiki/Darboux's_theorem , https://en.wikipedia.org/wiki/Integrability_conditions_for_differential_systems)

So yes, my conclusion was wrong, sorry OP, I should have thought it through more!

An example that may be helpful for OP since you're looking into integrable systems:

Suppose you wanted to find the necessary and sufficient conditions for a scalar function ##e^\phi## to be an integrating factor for some one-form ##\omega##. This means you're looking for ##d(e^\phi \omega) = 0## using our integrability condition ##\omega \wedge d\omega = 0##; expanding and dividing out the overall factor ##e^\phi## gives ##d\omega + d\phi \wedge \omega = 0##. If we put ##\omega## in "normal form", which looks like ##\omega = x^1dx^2+x^3dx^4+...+dx^n##, we can conclude that there can't be a ##dx^n## term, because then ##\omega \wedge d\omega## would have stray terms like ##dx^n \wedge dx^1 \wedge dx^2##. We also conclude that it can't have more than a single monomial, because we'd have stray terms like ##x^1 dx^2 \wedge dx^3 \wedge dx^4##.

So, we can say that ##\omega = x^1 dx^2## and find the integrating factor to be ##\frac{1}{x^1}##.

(I believe this example comes from "Applied differential geometry" by Burke)
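A quick sympy check of the integrating-factor example above: with ##\omega = x^1 dx^2##, the rescaled form ##\omega / x^1 = dx^2## is indeed closed (the coordinate names are just the placeholders from the normal form):

```python
# Sketch check (sympy): omega = x1 dx2 has integrating factor 1/x1, i.e. d(omega/x1) = 0.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
coords = [x1, x2]

omega = [0, x1]                        # omega = x1 dx2
scaled = [w / x1 for w in omega]       # omega / x1 = dx2

print(all(sp.diff(scaled[n], coords[m]) - sp.diff(scaled[m], coords[n]) == 0
          for m in range(2) for n in range(2)))   # True: d(omega/x1) = 0
```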
 
  • #56
Some interesting lecture notes I found on these theorems: chapter 4 is where you should start, but lecture 21 onward seems the most fruitful if you're looking for a global application:

https://stanford.edu/~sfh/257A.pdf

I'll take a deeper look at them in the future, but maybe this can help.
 
  • #57
romsofia said:
Even times odd is always even
Oops, yes, you're right.
 
  • #58
I think the take-home point is the following: the global Frobenius theorem says that the condition ##\omega \wedge d\omega = 0## at every point ##p## of the spacetime manifold is equivalent to the existence of maximal integral immersed submanifolds foliating the whole manifold -- i.e. the entire spacetime manifold can be foliated by the union of those (disjoint) integral submanifolds (leaves). *Locally* there exist two smooth functions ##f## and ##g## such that ##\omega = f dg##. That means that locally (i.e. in an open neighborhood of the point ##p##) the level sets of ##g## are the integral submanifolds around ##p##.

However, the point to make clear is that the set of integral submanifolds may not be given as the level sets of a single smooth function defined globally on the whole manifold.
 
Last edited:
  • #60
PeterDonis said:
@cianfa72 do Sachs & Wu give any examples of frames that meet the various conditions?
See the relevant pages about synchronization frames. Here ##Z## is a vector field such that ##g(Z,Z)=-1## everywhere (i.e. it defines a unit timelike congruence that fills the entire spacetime), and ##\xi## is the one-form ##\xi = g(Z, \_)##.

'Locally proper time synchronizable' means irrotational (i.e. zero vorticity, ##d\xi \wedge \xi=0##) and geodesic (so the two conditions together actually amount to ##d\xi=0##), while 'proper time synchronizable' requires a global function ##t## such that ##\xi=-dt##. By the Frobenius theorem, the difference between the two is really the existence of a global function ##t## from which one gets the orthogonal spacelike hypersurface foliation.

Sachs & Wu in section 5.3 claim that if the congruence is proper time synchronizable then the observers it represents can synchronize their wristwatches using a radar procedure (basically the Einstein synchronization procedure) -- see also the PF thread referenced in the OP.
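As an illustration of the 'proper time synchronizable' case (a toy example of mine, not one given by Sachs & Wu): for comoving observers ##Z = \partial_t## in a spatially flat FRW-type metric ##ds^2 = -dt^2 + a(t)^2(dx^2+dy^2+dz^2)##, one finds ##\xi = g(Z,\_) = -dt## globally, and in particular ##d\xi = 0##. A sympy sketch:

```python
# Illustrative sketch (sympy), not an example from Sachs & Wu: comoving observers
# Z = d/dt in the metric ds^2 = -dt^2 + a(t)^2 (dx^2 + dy^2 + dz^2) give xi = -dt.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]
a = sp.Function('a')(t)

g = sp.diag(-1, a**2, a**2, a**2)       # metric components g_{mu nu}
Z = sp.Matrix([1, 0, 0, 0])             # Z = d/dt, with g(Z, Z) = -1

xi = g * Z                              # components of the one-form xi = g(Z, _)
print(xi.T)                             # Matrix([[-1, 0, 0, 0]]), i.e. xi = -dt

print(all(sp.diff(xi[n], coords[m]) - sp.diff(xi[m], coords[n]) == 0
          for m in range(4) for n in range(4)))   # True: d(xi) = 0
```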
 

Attachments

  • GR for mathematicians - p133-137.pdf
    211 KB
Last edited:
  • #61
cianfa72 said:
See the relevant pages about synchronization frames.
The pages you attached talk about light signals and synchronization using them. They don't give any specific examples of frames that have the various properties. The only examples given of specific spacetimes are of light signals in those spacetimes, not frames.
 
  • #62
PeterDonis said:
The pages you attached talk about light signals and synchronization using them. They don't give any specific examples of frames that have the various properties. The only examples given of specific spacetimes are of light signals in those spacetimes, not frames.
Yes, there are no specific examples of such frames.

About the synchronization procedure employing light signals, I don't have a clear understanding of where the hypersurface orthogonality condition enters in that procedure.
 
  • #63
cianfa72 said:
I don't have a clear understanding of where the hypersurface orthogonality condition enters in that procedure.
Exercise 5.2.6 seems to be relevant to this.
 
  • #64
PeterDonis said:
Exercise 5.2.6 seems to be relevant to this.
Ah ok, basically you are saying the curve ##\alpha: [0,a) \rightarrow W## can be understood as one of the geodesics that lies on the spacelike hypersurfaces orthogonal to the given geodesic ##\gamma## in the congruence.

BTW, I believe the condition for 'proper time synchronizable' (i.e. there is a global function ##t## defined on the entire spacetime such that ##\xi = - dt##) actually amounts to the existence of a coordinate chart such that the metric tensor ##g_{\mu \nu}## has components ##g_{00}=1## and ##g_{0 \alpha}=0## for ##\alpha=1,2,3##.
 
Last edited:
  • #65
cianfa72 said:
the curve ##\alpha: [0,a) \rightarrow W## can be understood as one of the geodesics that lies on the spacelike hypersurfaces orthogonal to the given geodesic ##\gamma## in the congruence.
That's how I read it, yes.

cianfa72 said:
I believe the condition for 'proper time synchronizable' (i.e. there is a global function ##t## defined on the entire spacetime such that ##\xi = - dt##) actually amounts to the existence of a coordinate chart such that the metric tensor ##g_{\mu \nu}## has components ##g_{00}=1## and ##g_{0 \alpha}=0## for ##\alpha=1,2,3##.
I believe that's correct, yes. If we generalize to allow ##g_{00}## to be something other than ##1## but still require ##g_{0 \alpha} = 0##, then this is the condition for synchronizable.
 
  • #66
PeterDonis said:
I believe that's correct, yes.
I say "I believe", but actually it's easy to verify.

If we have a global function ##t## such that ##\xi = - dt##, then we can easily construct a coordinate chart meeting the conditions: ##x^0 = t## and choose the three ##x^i## (for ##i = 1, 2, 3##) such that ##\partial / \partial x^i## is orthogonal to ##\partial / \partial t##. Then ##\xi = - dt## ensures that ##g_{00} = 1## (because ##t## must be equal to proper time along integral curves of ##\xi##) and the orthogonality condition just mentioned ensures that ##g_{0 i} = 0##.

If we have a coordinate chart meeting the conditions, then we choose ##\xi = - dx^0## and we can see that ##x^0## is the global function ##t## that we are looking for.

If we generalize to ##\xi = - h dt## for some function ##h##, then we can verify the more general statement I made about a synchronizable congruence; it will turn out that ##h = \sqrt{|g_{00}|}##.
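Here is a sympy sketch of the generalized (synchronizable) case, using a hypothetical static-type metric ##ds^2 = -h(x)^2 dt^2 + dx^2 + dy^2 + dz^2## and the unit field ##Z = (1/h)\,\partial_t##; the associated one-form comes out as ##\xi = -h\,dt## with ##h = \sqrt{|g_{00}|}##, as stated above:

```python
# Sketch (sympy) with an assumed static-type metric, purely for illustration:
# ds^2 = -h(x)^2 dt^2 + dx^2 + dy^2 + dz^2, unit timelike field Z = (1/h) d/dt.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
h = sp.Function('h', positive=True)(x)

g = sp.diag(-h**2, 1, 1, 1)              # g_00 = -h^2, g_{0 alpha} = 0
Z = sp.Matrix([1 / h, 0, 0, 0])          # normalized so that g(Z, Z) = -1

print(sp.simplify((Z.T * g * Z)[0]))     # -1: Z is unit timelike
print((g * Z).T)                         # Matrix([[-h(x), 0, 0, 0]]), i.e. xi = -h dt
```

With ##x^0 = t## this chart has ##g_{0\alpha} = 0##, so it meets the synchronizable condition with ##h = \sqrt{|g_{00}|}##.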
 
  • #67
PeterDonis said:
If we have a global function ##t## such that ##\xi = - dt##, then we can easily construct a coordinate chart meeting the conditions: ##x^0 = t## and choose the three ##x^i## (for ##i = 1, 2, 3##) such that ##\partial / \partial x^i## is orthogonal to ##\partial / \partial t##. Then ##\xi = - dt## ensures that ##g_{00} = 1## (because ##t## must be equal to proper time along integral curves of ##\xi##) and the orthogonality condition just mentioned ensures that ##g_{0 i} = 0##.
ok, so the coordinate chart we get in this way is actually a global coordinate chart, I believe.
 
  • #68
cianfa72 said:
the coordinate chart we get in this way is actually a global coordinate chart, I believe.
Since this method of obtaining a chart only works for the synchronizable and proper time synchronizable cases, yes, it will be global since those cases require that the conditions are globally valid.
 
  • #69
PeterDonis said:
Then ##\xi = - dt## ensures that ##g_{00} = 1## (because ##t## must be equal to proper time along integral curves of ##\xi##)
Just to do the complete calculation: we start with a unit timelike vector field ##Q##, i.e. ##g(Q,Q)=-1##. Then ##\xi## is defined by ##\xi= g(Q, \_)##. Our goal is to integrate the one-form ##\xi=-dt## over the integral orbits of ##Q##.

Assume ##\gamma: I \rightarrow M## is an orbit parametrization based on proper time ##u##. Now the pullback of ##\xi## by ##\gamma## is exactly ##-du##, so the integral of ##\xi## along any integral orbit ##\gamma## is ##-\Delta u=\int_{\gamma} \xi =- \Delta t##. On the other hand, along any integral orbit we get ##du = \sqrt{g_{00}}\, dt##, hence ##g_{00}=1##.

For a 'synchronizable' congruence we get ##- \Delta u = \int_{\gamma} \xi =- \int_{t_1}^{t_2} h\,dt##, hence ##h=\sqrt{g_{00}}##.

Is the above correct? Thank you.
 
